=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run: kubectl --context addons-659513 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run: kubectl --context addons-659513 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run: kubectl --context addons-659513 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [33f3ec72-704c-4201-8ff2-47eac4b359fe] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [33f3ec72-704c-4201-8ff2-47eac4b359fe] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004493789s
I1221 19:49:17.955805 126345 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run: out/minikube-linux-amd64 -p addons-659513 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-659513 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.596557913s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run: kubectl --context addons-659513 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run: out/minikube-linux-amd64 -p addons-659513 ip
addons_test.go:301: (dbg) Run: nslookup hello-john.test 192.168.39.164
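Note on the failure above: ssh propagates the remote command's exit status, and curl's exit code 28 is "operation timed out", so the ingress never answered on 127.0.0.1 inside the VM before curl gave up. The check the test performs is essentially "poll a URL with an overridden Host header until it returns 200 or a deadline passes". A minimal, self-contained Go sketch of that pattern, using a local httptest server in place of the cluster (the names `waitForHost` and the handler are illustrative, not minikube's actual helpers):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"time"
)

// waitForHost polls url with the given Host header until it receives a
// 200 OK or the deadline passes — the Go equivalent of repeatedly running
// `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'`.
func waitForHost(url, host string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 2 * time.Second}
	for time.Now().Before(deadline) {
		req, err := http.NewRequest("GET", url, nil)
		if err != nil {
			return err
		}
		req.Host = host // overrides the Host header, like curl -H 'Host: …'
		if resp, err := client.Do(req); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no 200 from %s (Host: %s) within %s", url, host, timeout)
}

func main() {
	// Stand-in for the ingress: only answers 200 for the expected vhost.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Host == "nginx.example.com" {
			w.WriteHeader(http.StatusOK)
			return
		}
		w.WriteHeader(http.StatusNotFound)
	}))
	defer srv.Close()

	if err := waitForHost(srv.URL, "nginx.example.com", 5*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("ok")
}
```

In the real test the request is made from inside the VM over `minikube ssh`, so a timeout here points at the ingress controller or service wiring rather than the test harness.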
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-659513 -n addons-659513
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-amd64 -p addons-659513 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-659513 logs -n 25: (1.261650008s)
helpers_test.go:261: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-836309 │ download-only-836309 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
│ start │ --download-only -p binary-mirror-061430 --alsologtostderr --binary-mirror http://127.0.0.1:41125 --driver=kvm2 --container-runtime=crio │ binary-mirror-061430 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ │
│ delete │ -p binary-mirror-061430 │ binary-mirror-061430 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
│ addons │ enable dashboard -p addons-659513 │ addons-659513 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ │
│ addons │ disable dashboard -p addons-659513 │ addons-659513 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ │
│ start │ -p addons-659513 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-659513 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:48 UTC │
│ addons │ addons-659513 addons disable volcano --alsologtostderr -v=1 │ addons-659513 │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │ 21 Dec 25 19:48 UTC │
│ addons │ addons-659513 addons disable gcp-auth --alsologtostderr -v=1 │ addons-659513 │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │ 21 Dec 25 19:48 UTC │
│ addons │ enable headlamp -p addons-659513 --alsologtostderr -v=1 │ addons-659513 │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │ 21 Dec 25 19:48 UTC │
│ addons │ addons-659513 addons disable yakd --alsologtostderr -v=1 │ addons-659513 │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │ 21 Dec 25 19:49 UTC │
│ addons │ addons-659513 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-659513 │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │ 21 Dec 25 19:48 UTC │
│ addons │ addons-659513 addons disable headlamp --alsologtostderr -v=1 │ addons-659513 │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ 21 Dec 25 19:49 UTC │
│ addons │ addons-659513 addons disable metrics-server --alsologtostderr -v=1 │ addons-659513 │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ 21 Dec 25 19:49 UTC │
│ ip │ addons-659513 ip │ addons-659513 │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ 21 Dec 25 19:49 UTC │
│ addons │ addons-659513 addons disable registry --alsologtostderr -v=1 │ addons-659513 │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ 21 Dec 25 19:49 UTC │
│ ssh │ addons-659513 ssh cat /opt/local-path-provisioner/pvc-7cf3985a-8a2e-4729-b39d-80336e9e7676_default_test-pvc/file1 │ addons-659513 │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ 21 Dec 25 19:49 UTC │
│ addons │ addons-659513 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-659513 │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ 21 Dec 25 19:49 UTC │
│ addons │ addons-659513 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-659513 │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ 21 Dec 25 19:49 UTC │
│ ssh │ addons-659513 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-659513 │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-659513 │ addons-659513 │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ 21 Dec 25 19:49 UTC │
│ addons │ addons-659513 addons disable registry-creds --alsologtostderr -v=1 │ addons-659513 │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ 21 Dec 25 19:49 UTC │
│ addons │ addons-659513 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-659513 │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ 21 Dec 25 19:49 UTC │
│ addons │ addons-659513 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-659513 │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ 21 Dec 25 19:49 UTC │
│ addons │ addons-659513 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-659513 │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ 21 Dec 25 19:49 UTC │
│ ip │ addons-659513 ip │ addons-659513 │ jenkins │ v1.37.0 │ 21 Dec 25 19:51 UTC │ 21 Dec 25 19:51 UTC │
└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/21 19:46:26
Running on machine: ubuntu-20-agent-13
Binary: Built with gc go1.25.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1221 19:46:26.793172 127170 out.go:360] Setting OutFile to fd 1 ...
I1221 19:46:26.793463 127170 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:46:26.793474 127170 out.go:374] Setting ErrFile to fd 2...
I1221 19:46:26.793483 127170 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:46:26.793680 127170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
I1221 19:46:26.794185 127170 out.go:368] Setting JSON to false
I1221 19:46:26.795005 127170 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":12537,"bootTime":1766333850,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1221 19:46:26.795063 127170 start.go:143] virtualization: kvm guest
I1221 19:46:26.797090 127170 out.go:179] * [addons-659513] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1221 19:46:26.798220 127170 notify.go:221] Checking for updates...
I1221 19:46:26.798230 127170 out.go:179] - MINIKUBE_LOCATION=22179
I1221 19:46:26.799686 127170 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1221 19:46:26.801148 127170 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
I1221 19:46:26.802447 127170 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
I1221 19:46:26.803877 127170 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1221 19:46:26.805107 127170 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1221 19:46:26.806426 127170 driver.go:422] Setting default libvirt URI to qemu:///system
I1221 19:46:26.836097 127170 out.go:179] * Using the kvm2 driver based on user configuration
I1221 19:46:26.837263 127170 start.go:309] selected driver: kvm2
I1221 19:46:26.837290 127170 start.go:928] validating driver "kvm2" against <nil>
I1221 19:46:26.837311 127170 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1221 19:46:26.838320 127170 start_flags.go:329] no existing cluster config was found, will generate one from the flags
I1221 19:46:26.838668 127170 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1221 19:46:26.838708 127170 cni.go:84] Creating CNI manager for ""
I1221 19:46:26.838763 127170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1221 19:46:26.838775 127170 start_flags.go:338] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1221 19:46:26.838827 127170 start.go:353] cluster config:
{Name:addons-659513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-659513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1221 19:46:26.838951 127170 iso.go:125] acquiring lock: {Name:mk32aed4917b82431a8f5160a35db6118385a2c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1221 19:46:26.841173 127170 out.go:179] * Starting "addons-659513" primary control-plane node in "addons-659513" cluster
I1221 19:46:26.842551 127170 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
I1221 19:46:26.842591 127170 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-122429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
I1221 19:46:26.842604 127170 cache.go:65] Caching tarball of preloaded images
I1221 19:46:26.842674 127170 preload.go:251] Found /home/jenkins/minikube-integration/22179-122429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1221 19:46:26.842684 127170 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
I1221 19:46:26.843014 127170 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/config.json ...
I1221 19:46:26.843040 127170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/config.json: {Name:mk4cab2001293abff638904bb7d40fa859a87d0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1221 19:46:26.843204 127170 start.go:360] acquireMachinesLock for addons-659513: {Name:mkd449b545e9165e82ce02652c0c22eb5894063b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1221 19:46:26.843292 127170 start.go:364] duration metric: took 51.662µs to acquireMachinesLock for "addons-659513"
I1221 19:46:26.843318 127170 start.go:93] Provisioning new machine with config: &{Name:addons-659513 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-659513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
I1221 19:46:26.843371 127170 start.go:125] createHost starting for "" (driver="kvm2")
I1221 19:46:26.844998 127170 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1221 19:46:26.845166 127170 start.go:159] libmachine.API.Create for "addons-659513" (driver="kvm2")
I1221 19:46:26.845195 127170 client.go:173] LocalClient.Create starting
I1221 19:46:26.845325 127170 main.go:144] libmachine: Creating CA: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem
I1221 19:46:26.864968 127170 main.go:144] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/cert.pem
I1221 19:46:26.904168 127170 main.go:144] libmachine: creating domain...
I1221 19:46:26.904189 127170 main.go:144] libmachine: creating network...
I1221 19:46:26.905574 127170 main.go:144] libmachine: found existing default network
I1221 19:46:26.905845 127170 main.go:144] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1221 19:46:26.906347 127170 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c5c7f0}
I1221 19:46:26.906475 127170 main.go:144] libmachine: defining private network:
<network>
<name>mk-addons-659513</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1221 19:46:26.912248 127170 main.go:144] libmachine: creating private network mk-addons-659513 192.168.39.0/24...
I1221 19:46:26.981031 127170 main.go:144] libmachine: private network mk-addons-659513 192.168.39.0/24 created
I1221 19:46:26.981316 127170 main.go:144] libmachine: <network>
<name>mk-addons-659513</name>
<uuid>60972456-5ec1-4ea6-b8f1-c69c8ff211b5</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:a0:9a:4e'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1221 19:46:26.981352 127170 main.go:144] libmachine: setting up store path in /home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513 ...
I1221 19:46:26.981383 127170 main.go:144] libmachine: building disk image from file:///home/jenkins/minikube-integration/22179-122429/.minikube/cache/iso/amd64/minikube-v1.37.0-1766254259-22261-amd64.iso
I1221 19:46:26.981396 127170 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22179-122429/.minikube
I1221 19:46:26.981515 127170 main.go:144] libmachine: Downloading /home/jenkins/minikube-integration/22179-122429/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22179-122429/.minikube/cache/iso/amd64/minikube-v1.37.0-1766254259-22261-amd64.iso...
I1221 19:46:27.232993 127170 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa...
I1221 19:46:27.287717 127170 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/addons-659513.rawdisk...
I1221 19:46:27.287759 127170 main.go:144] libmachine: Writing magic tar header
I1221 19:46:27.287820 127170 main.go:144] libmachine: Writing SSH key tar header
I1221 19:46:27.287910 127170 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513 ...
I1221 19:46:27.287977 127170 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513
I1221 19:46:27.288000 127170 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513 (perms=drwx------)
I1221 19:46:27.288011 127170 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22179-122429/.minikube/machines
I1221 19:46:27.288021 127170 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22179-122429/.minikube/machines (perms=drwxr-xr-x)
I1221 19:46:27.288031 127170 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22179-122429/.minikube
I1221 19:46:27.288043 127170 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22179-122429/.minikube (perms=drwxr-xr-x)
I1221 19:46:27.288051 127170 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22179-122429
I1221 19:46:27.288059 127170 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22179-122429 (perms=drwxrwxr-x)
I1221 19:46:27.288071 127170 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1221 19:46:27.288079 127170 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1221 19:46:27.288091 127170 main.go:144] libmachine: checking permissions on dir: /home/jenkins
I1221 19:46:27.288098 127170 main.go:144] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1221 19:46:27.288108 127170 main.go:144] libmachine: checking permissions on dir: /home
I1221 19:46:27.288115 127170 main.go:144] libmachine: skipping /home - not owner
I1221 19:46:27.288122 127170 main.go:144] libmachine: defining domain...
I1221 19:46:27.289527 127170 main.go:144] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-659513</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/addons-659513.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-659513'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
I1221 19:46:27.294901 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:ec:79:39 in network default
I1221 19:46:27.295454 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:27.295475 127170 main.go:144] libmachine: starting domain...
I1221 19:46:27.295480 127170 main.go:144] libmachine: ensuring networks are active...
I1221 19:46:27.296173 127170 main.go:144] libmachine: Ensuring network default is active
I1221 19:46:27.296561 127170 main.go:144] libmachine: Ensuring network mk-addons-659513 is active
I1221 19:46:27.297119 127170 main.go:144] libmachine: getting domain XML...
I1221 19:46:27.298188 127170 main.go:144] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-659513</name>
<uuid>536fbf62-98e1-4d4f-bd81-908693d32210</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/addons-659513.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:56:ba:4c'/>
<source network='mk-addons-659513'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:ec:79:39'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1221 19:46:28.562848 127170 main.go:144] libmachine: waiting for domain to start...
I1221 19:46:28.564397 127170 main.go:144] libmachine: domain is now running
I1221 19:46:28.564420 127170 main.go:144] libmachine: waiting for IP...
I1221 19:46:28.565233 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:28.566189 127170 main.go:144] libmachine: no network interface addresses found for domain addons-659513 (source=lease)
I1221 19:46:28.566211 127170 main.go:144] libmachine: trying to list again with source=arp
I1221 19:46:28.566604 127170 main.go:144] libmachine: unable to find current IP address of domain addons-659513 in network mk-addons-659513 (interfaces detected: [])
I1221 19:46:28.566665 127170 retry.go:84] will retry after 300ms: waiting for domain to come up
I1221 19:46:28.825289 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:28.826170 127170 main.go:144] libmachine: no network interface addresses found for domain addons-659513 (source=lease)
I1221 19:46:28.826192 127170 main.go:144] libmachine: trying to list again with source=arp
I1221 19:46:28.826572 127170 main.go:144] libmachine: unable to find current IP address of domain addons-659513 in network mk-addons-659513 (interfaces detected: [])
I1221 19:46:29.132113 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:29.132909 127170 main.go:144] libmachine: no network interface addresses found for domain addons-659513 (source=lease)
I1221 19:46:29.132924 127170 main.go:144] libmachine: trying to list again with source=arp
I1221 19:46:29.133254 127170 main.go:144] libmachine: unable to find current IP address of domain addons-659513 in network mk-addons-659513 (interfaces detected: [])
I1221 19:46:29.440831 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:29.441664 127170 main.go:144] libmachine: no network interface addresses found for domain addons-659513 (source=lease)
I1221 19:46:29.441679 127170 main.go:144] libmachine: trying to list again with source=arp
I1221 19:46:29.442013 127170 main.go:144] libmachine: unable to find current IP address of domain addons-659513 in network mk-addons-659513 (interfaces detected: [])
I1221 19:46:30.049064 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:30.049962 127170 main.go:144] libmachine: no network interface addresses found for domain addons-659513 (source=lease)
I1221 19:46:30.049986 127170 main.go:144] libmachine: trying to list again with source=arp
I1221 19:46:30.050303 127170 main.go:144] libmachine: unable to find current IP address of domain addons-659513 in network mk-addons-659513 (interfaces detected: [])
I1221 19:46:30.771349 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:30.772040 127170 main.go:144] libmachine: no network interface addresses found for domain addons-659513 (source=lease)
I1221 19:46:30.772057 127170 main.go:144] libmachine: trying to list again with source=arp
I1221 19:46:30.772346 127170 main.go:144] libmachine: unable to find current IP address of domain addons-659513 in network mk-addons-659513 (interfaces detected: [])
I1221 19:46:31.380188 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:31.380858 127170 main.go:144] libmachine: no network interface addresses found for domain addons-659513 (source=lease)
I1221 19:46:31.380876 127170 main.go:144] libmachine: trying to list again with source=arp
I1221 19:46:31.381158 127170 main.go:144] libmachine: unable to find current IP address of domain addons-659513 in network mk-addons-659513 (interfaces detected: [])
I1221 19:46:32.569722 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:32.570419 127170 main.go:144] libmachine: no network interface addresses found for domain addons-659513 (source=lease)
I1221 19:46:32.570436 127170 main.go:144] libmachine: trying to list again with source=arp
I1221 19:46:32.570784 127170 main.go:144] libmachine: unable to find current IP address of domain addons-659513 in network mk-addons-659513 (interfaces detected: [])
I1221 19:46:33.752332 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:33.753007 127170 main.go:144] libmachine: no network interface addresses found for domain addons-659513 (source=lease)
I1221 19:46:33.753030 127170 main.go:144] libmachine: trying to list again with source=arp
I1221 19:46:33.753325 127170 main.go:144] libmachine: unable to find current IP address of domain addons-659513 in network mk-addons-659513 (interfaces detected: [])
I1221 19:46:35.396839 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:35.397531 127170 main.go:144] libmachine: no network interface addresses found for domain addons-659513 (source=lease)
I1221 19:46:35.397553 127170 main.go:144] libmachine: trying to list again with source=arp
I1221 19:46:35.397906 127170 main.go:144] libmachine: unable to find current IP address of domain addons-659513 in network mk-addons-659513 (interfaces detected: [])
I1221 19:46:37.263196 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:37.263996 127170 main.go:144] libmachine: no network interface addresses found for domain addons-659513 (source=lease)
I1221 19:46:37.264018 127170 main.go:144] libmachine: trying to list again with source=arp
I1221 19:46:37.264386 127170 main.go:144] libmachine: unable to find current IP address of domain addons-659513 in network mk-addons-659513 (interfaces detected: [])
I1221 19:46:39.808909 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:39.809768 127170 main.go:144] libmachine: no network interface addresses found for domain addons-659513 (source=lease)
I1221 19:46:39.809793 127170 main.go:144] libmachine: trying to list again with source=arp
I1221 19:46:39.810144 127170 main.go:144] libmachine: unable to find current IP address of domain addons-659513 in network mk-addons-659513 (interfaces detected: [])
I1221 19:46:43.197153 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:43.197994 127170 main.go:144] libmachine: domain addons-659513 has current primary IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:43.198007 127170 main.go:144] libmachine: found domain IP: 192.168.39.164
I1221 19:46:43.198015 127170 main.go:144] libmachine: reserving static IP address...
I1221 19:46:43.198446 127170 main.go:144] libmachine: unable to find host DHCP lease matching {name: "addons-659513", mac: "52:54:00:56:ba:4c", ip: "192.168.39.164"} in network mk-addons-659513
I1221 19:46:43.487871 127170 main.go:144] libmachine: reserved static IP address 192.168.39.164 for domain addons-659513
I1221 19:46:43.487901 127170 main.go:144] libmachine: waiting for SSH...
I1221 19:46:43.487926 127170 main.go:144] libmachine: Getting to WaitForSSH function...
I1221 19:46:43.491308 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:43.491976 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:minikube Clientid:01:52:54:00:56:ba:4c}
I1221 19:46:43.492014 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:43.492250 127170 main.go:144] libmachine: Using SSH client type: native
I1221 19:46:43.492525 127170 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil> [] 0s} 192.168.39.164 22 <nil> <nil>}
I1221 19:46:43.492540 127170 main.go:144] libmachine: About to run SSH command:
exit 0
I1221 19:46:43.600096 127170 main.go:144] libmachine: SSH cmd err, output: <nil>:
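The `exit 0` run above is how WaitForSSH decides the guest is reachable: it keeps re-running a no-op command over SSH until one invocation succeeds. A minimal sketch of that retry loop, with a local `run_probe` stand-in in place of the real SSH call (the user, host, and retry budget are illustrative assumptions, not values taken from minikube's source):

```shell
# Retry-loop sketch: poll until the probe succeeds or the budget runs out.
# run_probe is a stand-in; the real probe would be something like:
#   ssh -o ConnectTimeout=5 docker@192.168.39.164 'exit 0'
run_probe() { true; }

tries=0
until run_probe; do
  tries=$((tries + 1))
  if [ "$tries" -ge 30 ]; then
    echo "SSH never came up" >&2
    exit 1
  fi
  sleep 1
done
echo "SSH reachable after $tries failed attempts"
```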
I1221 19:46:43.600448 127170 main.go:144] libmachine: domain creation complete
I1221 19:46:43.601951 127170 machine.go:94] provisionDockerMachine start ...
I1221 19:46:43.604465 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:43.604895 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:minikube Clientid:01:52:54:00:56:ba:4c}
I1221 19:46:43.604918 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:43.605051 127170 main.go:144] libmachine: Using SSH client type: native
I1221 19:46:43.605252 127170 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil> [] 0s} 192.168.39.164 22 <nil> <nil>}
I1221 19:46:43.605261 127170 main.go:144] libmachine: About to run SSH command:
hostname
I1221 19:46:43.713386 127170 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
I1221 19:46:43.713416 127170 buildroot.go:166] provisioning hostname "addons-659513"
I1221 19:46:43.716267 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:43.716719 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:46:43.716740 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:43.716904 127170 main.go:144] libmachine: Using SSH client type: native
I1221 19:46:43.717154 127170 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil> [] 0s} 192.168.39.164 22 <nil> <nil>}
I1221 19:46:43.717167 127170 main.go:144] libmachine: About to run SSH command:
sudo hostname addons-659513 && echo "addons-659513" | sudo tee /etc/hostname
I1221 19:46:43.842661 127170 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-659513
I1221 19:46:43.845869 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:43.846339 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:46:43.846375 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:43.846591 127170 main.go:144] libmachine: Using SSH client type: native
I1221 19:46:43.846874 127170 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil> [] 0s} 192.168.39.164 22 <nil> <nil>}
I1221 19:46:43.846901 127170 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-659513' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-659513/g' /etc/hosts;
else
echo '127.0.1.1 addons-659513' | sudo tee -a /etc/hosts;
fi
fi
I1221 19:46:43.967319 127170 main.go:144] libmachine: SSH cmd err, output: <nil>:
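The heredoc above is the hostname-pinning step: only touch `/etc/hosts` when the name is missing, and prefer rewriting an existing `127.0.1.1` line over appending a second one. The same logic can be replayed against a throwaway copy (the temp file and its seed contents are stand-ins for the real `/etc/hosts`; assumes GNU grep/sed):

```shell
# Replay the idempotent /etc/hosts edit on a scratch file instead of the
# real one. NAME matches the profile in the log; HOSTS is a temp stand-in.
NAME=addons-659513
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 minikube\n' > "$HOSTS"

if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # An entry exists: rewrite it rather than appending a duplicate.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

Running it a second time is a no-op, since the first `grep` then finds the name.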
I1221 19:46:43.967373 127170 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22179-122429/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-122429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-122429/.minikube}
I1221 19:46:43.967392 127170 buildroot.go:174] setting up certificates
I1221 19:46:43.967405 127170 provision.go:84] configureAuth start
I1221 19:46:43.970186 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:43.970673 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:46:43.970700 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:43.973061 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:43.973397 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:46:43.973419 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:43.973567 127170 provision.go:143] copyHostCerts
I1221 19:46:43.973655 127170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-122429/.minikube/cert.pem (1123 bytes)
I1221 19:46:43.973767 127170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-122429/.minikube/key.pem (1679 bytes)
I1221 19:46:43.973880 127170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-122429/.minikube/ca.pem (1082 bytes)
I1221 19:46:43.973948 127170 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-122429/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca-key.pem org=jenkins.addons-659513 san=[127.0.0.1 192.168.39.164 addons-659513 localhost minikube]
I1221 19:46:44.165150 127170 provision.go:177] copyRemoteCerts
I1221 19:46:44.165219 127170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1221 19:46:44.167681 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:44.168037 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:46:44.168063 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:44.168177 127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
I1221 19:46:44.253956 127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1221 19:46:44.286179 127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1221 19:46:44.315895 127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1221 19:46:44.344867 127170 provision.go:87] duration metric: took 377.423662ms to configureAuth
I1221 19:46:44.344905 127170 buildroot.go:189] setting minikube options for container-runtime
I1221 19:46:44.345090 127170 config.go:182] Loaded profile config "addons-659513": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:46:44.348307 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:44.348787 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:46:44.348820 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:44.349069 127170 main.go:144] libmachine: Using SSH client type: native
I1221 19:46:44.349343 127170 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil> [] 0s} 192.168.39.164 22 <nil> <nil>}
I1221 19:46:44.349364 127170 main.go:144] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1221 19:46:44.594585 127170 main.go:144] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
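The command above persists the extra CRI-O flags as an environment file and then restarts the runtime. The file write itself can be replayed against a scratch directory (SYSCONF stands in for `/etc/sysconfig`, and the `systemctl restart crio` step is deliberately left out):

```shell
# Write the CRI-O drop-in to a scratch dir; tee also echoes the content,
# which is why the log shows the file body right after the SSH command.
SYSCONF=$(mktemp -d)
printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | tee "$SYSCONF/crio.minikube"
```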
I1221 19:46:44.594621 127170 machine.go:97] duration metric: took 992.652445ms to provisionDockerMachine
I1221 19:46:44.594638 127170 client.go:176] duration metric: took 17.74943249s to LocalClient.Create
I1221 19:46:44.594668 127170 start.go:167] duration metric: took 17.749500937s to libmachine.API.Create "addons-659513"
I1221 19:46:44.594680 127170 start.go:293] postStartSetup for "addons-659513" (driver="kvm2")
I1221 19:46:44.594694 127170 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1221 19:46:44.594800 127170 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1221 19:46:44.597899 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:44.598469 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:46:44.598507 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:44.598679 127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
I1221 19:46:44.683977 127170 ssh_runner.go:195] Run: cat /etc/os-release
I1221 19:46:44.689116 127170 info.go:137] Remote host: Buildroot 2025.02
I1221 19:46:44.689145 127170 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-122429/.minikube/addons for local assets ...
I1221 19:46:44.689208 127170 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-122429/.minikube/files for local assets ...
I1221 19:46:44.689231 127170 start.go:296] duration metric: took 94.544681ms for postStartSetup
I1221 19:46:44.700171 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:44.701649 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:46:44.701693 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:44.702013 127170 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/config.json ...
I1221 19:46:44.702241 127170 start.go:128] duration metric: took 17.858850337s to createHost
I1221 19:46:44.704326 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:44.704699 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:46:44.704719 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:44.704853 127170 main.go:144] libmachine: Using SSH client type: native
I1221 19:46:44.705034 127170 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil> [] 0s} 192.168.39.164 22 <nil> <nil>}
I1221 19:46:44.705043 127170 main.go:144] libmachine: About to run SSH command:
date +%s.%N
I1221 19:46:44.813375 127170 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766346404.776216256
I1221 19:46:44.813407 127170 fix.go:216] guest clock: 1766346404.776216256
I1221 19:46:44.813415 127170 fix.go:229] Guest: 2025-12-21 19:46:44.776216256 +0000 UTC Remote: 2025-12-21 19:46:44.702254752 +0000 UTC m=+17.956660930 (delta=73.961504ms)
I1221 19:46:44.813433 127170 fix.go:200] guest clock delta is within tolerance: 73.961504ms
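The `date +%s.%N` exchange above is a clock-skew check: minikube compares the guest's epoch time against the host's and only intervenes if the delta leaves tolerance (73.9ms here, well inside bounds). The arithmetic can be sketched locally, taking both samples on the same machine so the delta is expected to be tiny (assumes GNU `date` for `%N`; awk is used because the timestamps are fractional):

```shell
# Take two fractional-second timestamps and compute the absolute delta,
# mirroring the guest/host comparison in the log.
guest=$(date +%s.%N)
host=$(date +%s.%N)
delta=$(awk -v g="$guest" -v h="$host" \
  'BEGIN { d = h - g; if (d < 0) d = -d; printf "%.9f", d }')
echo "clock delta: ${delta}s"
```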
I1221 19:46:44.813438 127170 start.go:83] releasing machines lock for "addons-659513", held for 17.970133873s
I1221 19:46:44.816517 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:44.816935 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:46:44.816962 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:44.817514 127170 ssh_runner.go:195] Run: cat /version.json
I1221 19:46:44.817571 127170 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1221 19:46:44.820682 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:44.820916 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:44.821134 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:46:44.821170 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:44.821356 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:46:44.821354 127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
I1221 19:46:44.821387 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:44.821612 127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
I1221 19:46:44.919283 127170 ssh_runner.go:195] Run: systemctl --version
I1221 19:46:44.925829 127170 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1221 19:46:45.352278 127170 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1221 19:46:45.359651 127170 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1221 19:46:45.359745 127170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1221 19:46:45.379127 127170 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1221 19:46:45.379155 127170 start.go:496] detecting cgroup driver to use...
I1221 19:46:45.379218 127170 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1221 19:46:45.399107 127170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1221 19:46:45.417418 127170 docker.go:218] disabling cri-docker service (if available) ...
I1221 19:46:45.417583 127170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1221 19:46:45.435868 127170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1221 19:46:45.452880 127170 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1221 19:46:45.616657 127170 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1221 19:46:45.845431 127170 docker.go:234] disabling docker service ...
I1221 19:46:45.845566 127170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1221 19:46:45.863463 127170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1221 19:46:45.882585 127170 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1221 19:46:46.048711 127170 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1221 19:46:46.191469 127170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1221 19:46:46.208719 127170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1221 19:46:46.232278 127170 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1221 19:46:46.232345 127170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1221 19:46:46.244533 127170 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1221 19:46:46.244622 127170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1221 19:46:46.256733 127170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1221 19:46:46.268577 127170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1221 19:46:46.280322 127170 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1221 19:46:46.294060 127170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1221 19:46:46.306840 127170 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1221 19:46:46.327456 127170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
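The six `sed`/`grep` runs above rewrite CRI-O's drop-in config: pin the pause image, force the `cgroupfs` cgroup manager, reset `conmon_cgroup` to `pod`, and ensure `default_sysctls` allows unprivileged low ports. They can be replayed against a throwaway copy of the file (CONF and its seed contents are stand-ins for `/etc/crio/crio.conf.d/02-crio.conf`; assumes GNU sed for `-i` and for `\n` handling in the `a` command):

```shell
# Seed a fake 02-crio.conf, then apply the same edits the log shows.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
sed -i '/conmon_cgroup = .*/d' "$CONF"                      # drop the old value
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
grep -q '^ *default_sysctls' "$CONF" || \
  sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
cat "$CONF"
```

The delete-then-append pair for `conmon_cgroup` is what makes the edit idempotent: rerunning it cannot produce duplicate keys.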
I1221 19:46:46.339741 127170 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1221 19:46:46.350403 127170 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1221 19:46:46.350512 127170 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1221 19:46:46.370953 127170 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1221 19:46:46.383590 127170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1221 19:46:46.532727 127170 ssh_runner.go:195] Run: sudo systemctl restart crio
I1221 19:46:46.738903 127170 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1221 19:46:46.739020 127170 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1221 19:46:46.744514 127170 start.go:564] Will wait 60s for crictl version
I1221 19:46:46.744594 127170 ssh_runner.go:195] Run: which crictl
I1221 19:46:46.748790 127170 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1221 19:46:46.785541 127170 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1221 19:46:46.785666 127170 ssh_runner.go:195] Run: crio --version
I1221 19:46:46.820669 127170 ssh_runner.go:195] Run: crio --version
I1221 19:46:46.906406 127170 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
I1221 19:46:46.915176 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:46.915594 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:46:46.915623 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:46:46.915833 127170 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1221 19:46:46.921213 127170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
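The bash one-liner above keeps exactly one `host.minikube.internal` entry: filter out any stale tab-separated line, append the current gateway, and write through a temp file. Replayed on a scratch file (HOSTS stands in for the real `/etc/hosts`, the gateway IP is the one from the log, and bash is assumed for the `$'\t'` quoting):

```shell
# Rewrite host.minikube.internal idempotently on a scratch copy.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n192.168.39.9\thost.minikube.internal\n' > "$HOSTS"

TMP=$(mktemp)
{ grep -v $'\thost.minikube.internal$' "$HOSTS"; \
  echo "192.168.39.1 host.minikube.internal"; } > "$TMP"
cp "$TMP" "$HOSTS"     # the log uses sudo cp onto the real /etc/hosts
cat "$HOSTS"
```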
I1221 19:46:46.937559 127170 kubeadm.go:884] updating cluster {Name:addons-659513 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-659513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1221 19:46:46.937710 127170 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
I1221 19:46:46.937777 127170 ssh_runner.go:195] Run: sudo crictl images --output json
I1221 19:46:46.975983 127170 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
I1221 19:46:46.976068 127170 ssh_runner.go:195] Run: which lz4
I1221 19:46:46.980804 127170 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1221 19:46:46.985796 127170 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1221 19:46:46.985844 127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340314847 bytes)
I1221 19:46:48.203533 127170 crio.go:462] duration metric: took 1.222757681s to copy over tarball
I1221 19:46:48.203618 127170 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1221 19:46:49.669329 127170 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.465669427s)
I1221 19:46:49.669364 127170 crio.go:469] duration metric: took 1.46579883s to extract the tarball
I1221 19:46:49.669375 127170 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1221 19:46:49.706100 127170 ssh_runner.go:195] Run: sudo crictl images --output json
I1221 19:46:49.755759 127170 crio.go:514] all images are preloaded for cri-o runtime.
I1221 19:46:49.755785 127170 cache_images.go:86] Images are preloaded, skipping loading
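The preload sequence above follows an ensure-cached-artifact pattern: stat `/preloaded.tar.lz4`, scp it over when the stat fails, extract it into `/var`, then delete the tarball and re-check the images. A minimal local sketch of that flow (file names are illustrative, and plain `tarfile` stands in for the real `tar -I lz4` pipeline over SSH):

```python
import os
import shutil
import tarfile

def ensure_preload(cache_tarball: str, target_tarball: str, extract_dir: str) -> bool:
    """Copy the cached tarball into place if absent, extract it, then remove it.

    Returns True when a copy+extract actually happened (as in the log above,
    where the existence check fails and the scp/tar/rm sequence runs).
    """
    if os.path.exists(target_tarball):          # the "existence check" (stat)
        return False
    shutil.copy(cache_tarball, target_tarball)  # stands in for the scp step
    with tarfile.open(target_tarball) as tar:   # real flow pipes through lz4
        tar.extractall(extract_dir)
    os.remove(target_tarball)                   # rm /preloaded.tar.lz4
    return True
```

The early-return on an existing tarball mirrors the log's short-circuit when images are already preloaded.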
I1221 19:46:49.755795 127170 kubeadm.go:935] updating node { 192.168.39.164 8443 v1.34.3 crio true true} ...
I1221 19:46:49.755938 127170 kubeadm.go:947] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-659513 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.164
[Install]
config:
{KubernetesVersion:v1.34.3 ClusterName:addons-659513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1221 19:46:49.756025 127170 ssh_runner.go:195] Run: crio config
I1221 19:46:49.800898 127170 cni.go:84] Creating CNI manager for ""
I1221 19:46:49.800923 127170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1221 19:46:49.800945 127170 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1221 19:46:49.800967 127170 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.164 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-659513 NodeName:addons-659513 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1221 19:46:49.801085 127170 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.164
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/crio/crio.sock
name: "addons-659513"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.39.164"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.164"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1221 19:46:49.801147 127170 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
I1221 19:46:49.813256 127170 binaries.go:51] Found k8s binaries, skipping transfer
I1221 19:46:49.813368 127170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1221 19:46:49.825090 127170 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I1221 19:46:49.845153 127170 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1221 19:46:49.864927 127170 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
I1221 19:46:49.885107 127170 ssh_runner.go:195] Run: grep 192.168.39.164 control-plane.minikube.internal$ /etc/hosts
I1221 19:46:49.889281 127170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.164 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
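The `/etc/hosts` rewrite above is the classic filter-then-append idiom: `grep -v` drops any stale line for the hostname, `echo` appends the fresh mapping, and the temp file is copied back over `/etc/hosts`. A hedged Python equivalent of the same transformation, operating on hosts-format text rather than the live file:

```python
def ensure_host_entry(hosts_text: str, ip: str, hostname: str) -> str:
    """Drop any existing line mapping `hostname`, then append `ip<TAB>hostname`,
    mirroring the grep -v / echo / cp pipeline in the log above."""
    kept = [
        line for line in hosts_text.splitlines()
        if not line.rstrip().endswith("\t" + hostname)
        and not line.rstrip().endswith(" " + hostname)
    ]
    kept.append(f"{ip}\t{hostname}")
    return "\n".join(kept) + "\n"
```

Filtering before appending makes the operation idempotent: re-running it with the same IP leaves exactly one entry for the hostname.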
I1221 19:46:49.903809 127170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1221 19:46:50.042222 127170 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1221 19:46:50.075783 127170 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513 for IP: 192.168.39.164
I1221 19:46:50.075807 127170 certs.go:195] generating shared ca certs ...
I1221 19:46:50.075823 127170 certs.go:227] acquiring lock for ca certs: {Name:mkda19a66cdf101dd9d66a3219f3492b9fb00ea9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1221 19:46:50.075965 127170 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-122429/.minikube/ca.key
I1221 19:46:50.181556 127170 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-122429/.minikube/ca.crt ...
I1221 19:46:50.181591 127170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/.minikube/ca.crt: {Name:mk2b5cc8837700d02edda3aea25effa33f4607cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1221 19:46:50.181770 127170 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-122429/.minikube/ca.key ...
I1221 19:46:50.181781 127170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/.minikube/ca.key: {Name:mk4e031103f29442df42078ad479c1dddebebca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1221 19:46:50.181860 127170 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-122429/.minikube/proxy-client-ca.key
I1221 19:46:50.217804 127170 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-122429/.minikube/proxy-client-ca.crt ...
I1221 19:46:50.217834 127170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/.minikube/proxy-client-ca.crt: {Name:mk45376a283e1faa28fc0c4e184c4fc9d95a74a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1221 19:46:50.218000 127170 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-122429/.minikube/proxy-client-ca.key ...
I1221 19:46:50.218012 127170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/.minikube/proxy-client-ca.key: {Name:mk47096f7d96737d2b148e108e99e4246fde4cf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1221 19:46:50.218086 127170 certs.go:257] generating profile certs ...
I1221 19:46:50.218143 127170 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.key
I1221 19:46:50.218154 127170 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt with IP's: []
I1221 19:46:50.251348 127170 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt ...
I1221 19:46:50.251376 127170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: {Name:mkb06e42755b88e2b2958dafd8bf92399d2404c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1221 19:46:50.251527 127170 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.key ...
I1221 19:46:50.251542 127170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.key: {Name:mka38165d0c3f24db3945be76ba9af293cd5085c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1221 19:46:50.251620 127170 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/apiserver.key.2a85b83d
I1221 19:46:50.251640 127170 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/apiserver.crt.2a85b83d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.164]
I1221 19:46:50.350045 127170 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/apiserver.crt.2a85b83d ...
I1221 19:46:50.350082 127170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/apiserver.crt.2a85b83d: {Name:mk4bcba66f72c95a0f4d5cbb28ab113907a605c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1221 19:46:50.350282 127170 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/apiserver.key.2a85b83d ...
I1221 19:46:50.350301 127170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/apiserver.key.2a85b83d: {Name:mk41a18cda5bca449a80cf2a89fa2251133f71d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1221 19:46:50.350393 127170 certs.go:382] copying /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/apiserver.crt.2a85b83d -> /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/apiserver.crt
I1221 19:46:50.350481 127170 certs.go:386] copying /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/apiserver.key.2a85b83d -> /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/apiserver.key
I1221 19:46:50.350556 127170 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/proxy-client.key
I1221 19:46:50.350582 127170 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/proxy-client.crt with IP's: []
I1221 19:46:50.447003 127170 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/proxy-client.crt ...
I1221 19:46:50.447035 127170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/proxy-client.crt: {Name:mk24f8bec2a16680dca4d7845a13d5a21324eaa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1221 19:46:50.447227 127170 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/proxy-client.key ...
I1221 19:46:50.447253 127170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/proxy-client.key: {Name:mk3427ee7161a4c5f2da22ea973d1cc86c00d395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1221 19:46:50.447511 127170 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca-key.pem (1675 bytes)
I1221 19:46:50.447560 127170 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem (1082 bytes)
I1221 19:46:50.447601 127170 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/cert.pem (1123 bytes)
I1221 19:46:50.447633 127170 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/key.pem (1679 bytes)
I1221 19:46:50.448280 127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1221 19:46:50.479658 127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1221 19:46:50.508097 127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1221 19:46:50.536972 127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1221 19:46:50.564984 127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1221 19:46:50.593602 127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1221 19:46:50.622142 127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1221 19:46:50.650372 127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1221 19:46:50.679003 127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1221 19:46:50.707647 127170 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I1221 19:46:50.728048 127170 ssh_runner.go:195] Run: openssl version
I1221 19:46:50.734462 127170 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1221 19:46:50.748154 127170 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1221 19:46:50.760825 127170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1221 19:46:50.766526 127170 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
I1221 19:46:50.766588 127170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1221 19:46:50.776068 127170 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1221 19:46:50.788150 127170 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1221 19:46:50.800669 127170 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1221 19:46:50.806135 127170 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1221 19:46:50.806191 127170 kubeadm.go:401] StartCluster: {Name:addons-659513 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-659513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1221 19:46:50.806275 127170 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1221 19:46:50.806345 127170 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1221 19:46:50.840534 127170 cri.go:96] found id: ""
I1221 19:46:50.840615 127170 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1221 19:46:50.853027 127170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1221 19:46:50.864501 127170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1221 19:46:50.875558 127170 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1221 19:46:50.875575 127170 kubeadm.go:158] found existing configuration files:
I1221 19:46:50.875615 127170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1221 19:46:50.885556 127170 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1221 19:46:50.885621 127170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1221 19:46:50.896335 127170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1221 19:46:50.906565 127170 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1221 19:46:50.906631 127170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1221 19:46:50.917944 127170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1221 19:46:50.928089 127170 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1221 19:46:50.928158 127170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1221 19:46:50.939811 127170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1221 19:46:50.950719 127170 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1221 19:46:50.950785 127170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
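The stale-config sweep above greps each kubeconfig (`admin.conf`, `kubelet.conf`, `controller-manager.conf`, `scheduler.conf`) for the expected control-plane endpoint and runs `rm -f` when the endpoint is absent — which, on this first start, is every file, since none exist yet. A minimal sketch of that keep-or-remove decision, assuming plain files on local disk:

```python
import os

def sweep_stale_kubeconfigs(paths: list[str], endpoint: str) -> list[str]:
    """Remove each config that does not mention `endpoint`; return what was removed.
    A missing file behaves like a failed grep (and rm -f on it is a no-op),
    matching the log above."""
    removed = []
    for path in paths:
        try:
            with open(path) as f:
                if endpoint in f.read():
                    continue  # endpoint present: keep this config
        except FileNotFoundError:
            pass  # nothing on disk, so nothing to remove
        else:
            os.remove(path)  # stale endpoint: drop it so kubeadm regenerates it
            removed.append(path)
    return removed
```

Sweeping per-file rather than deleting the whole directory keeps any config that already points at the right endpoint, which is what makes restarts of an existing cluster cheap.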
I1221 19:46:50.962221 127170 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1221 19:46:51.118864 127170 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1221 19:47:02.465041 127170 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
I1221 19:47:02.465130 127170 kubeadm.go:319] [preflight] Running pre-flight checks
I1221 19:47:02.465234 127170 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1221 19:47:02.465350 127170 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1221 19:47:02.465447 127170 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1221 19:47:02.465516 127170 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1221 19:47:02.466977 127170 out.go:252] - Generating certificates and keys ...
I1221 19:47:02.467036 127170 kubeadm.go:319] [certs] Using existing ca certificate authority
I1221 19:47:02.467089 127170 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1221 19:47:02.467166 127170 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1221 19:47:02.467249 127170 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1221 19:47:02.467353 127170 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1221 19:47:02.467431 127170 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1221 19:47:02.467528 127170 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1221 19:47:02.467713 127170 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-659513 localhost] and IPs [192.168.39.164 127.0.0.1 ::1]
I1221 19:47:02.467794 127170 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1221 19:47:02.467979 127170 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-659513 localhost] and IPs [192.168.39.164 127.0.0.1 ::1]
I1221 19:47:02.468074 127170 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1221 19:47:02.468171 127170 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1221 19:47:02.468241 127170 kubeadm.go:319] [certs] Generating "sa" key and public key
I1221 19:47:02.468332 127170 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1221 19:47:02.468381 127170 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1221 19:47:02.468468 127170 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1221 19:47:02.468551 127170 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1221 19:47:02.468622 127170 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1221 19:47:02.468703 127170 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1221 19:47:02.468775 127170 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1221 19:47:02.468869 127170 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1221 19:47:02.470451 127170 out.go:252] - Booting up control plane ...
I1221 19:47:02.470571 127170 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1221 19:47:02.470675 127170 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1221 19:47:02.470807 127170 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1221 19:47:02.470999 127170 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1221 19:47:02.471157 127170 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1221 19:47:02.471301 127170 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1221 19:47:02.471377 127170 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1221 19:47:02.471410 127170 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1221 19:47:02.471532 127170 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1221 19:47:02.471652 127170 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1221 19:47:02.471741 127170 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001667437s
I1221 19:47:02.471860 127170 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1221 19:47:02.471969 127170 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.164:8443/livez
I1221 19:47:02.472090 127170 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1221 19:47:02.472192 127170 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1221 19:47:02.472305 127170 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.375292802s
I1221 19:47:02.472365 127170 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.782821207s
I1221 19:47:02.472423 127170 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.50240184s
I1221 19:47:02.472536 127170 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1221 19:47:02.472683 127170 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1221 19:47:02.472759 127170 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1221 19:47:02.473010 127170 kubeadm.go:319] [mark-control-plane] Marking the node addons-659513 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1221 19:47:02.473087 127170 kubeadm.go:319] [bootstrap-token] Using token: opiai1.qnvll8epf3ex3bpn
I1221 19:47:02.475272 127170 out.go:252] - Configuring RBAC rules ...
I1221 19:47:02.475361 127170 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1221 19:47:02.475441 127170 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1221 19:47:02.475600 127170 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1221 19:47:02.475727 127170 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1221 19:47:02.475824 127170 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1221 19:47:02.475892 127170 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1221 19:47:02.475982 127170 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1221 19:47:02.476047 127170 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1221 19:47:02.476126 127170 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1221 19:47:02.476135 127170 kubeadm.go:319]
I1221 19:47:02.476229 127170 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1221 19:47:02.476237 127170 kubeadm.go:319]
I1221 19:47:02.476342 127170 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1221 19:47:02.476355 127170 kubeadm.go:319]
I1221 19:47:02.476391 127170 kubeadm.go:319] mkdir -p $HOME/.kube
I1221 19:47:02.476478 127170 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1221 19:47:02.476562 127170 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1221 19:47:02.476578 127170 kubeadm.go:319]
I1221 19:47:02.476622 127170 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1221 19:47:02.476628 127170 kubeadm.go:319]
I1221 19:47:02.476677 127170 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1221 19:47:02.476686 127170 kubeadm.go:319]
I1221 19:47:02.476757 127170 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1221 19:47:02.476866 127170 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1221 19:47:02.476961 127170 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1221 19:47:02.476971 127170 kubeadm.go:319]
I1221 19:47:02.477076 127170 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1221 19:47:02.477180 127170 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1221 19:47:02.477188 127170 kubeadm.go:319]
I1221 19:47:02.477301 127170 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token opiai1.qnvll8epf3ex3bpn \
I1221 19:47:02.477433 127170 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:4f35461b95b227e9d1829c929bb399222e80c78f00e691e8dfd0f482c558d3d6 \
I1221 19:47:02.477462 127170 kubeadm.go:319] --control-plane
I1221 19:47:02.477465 127170 kubeadm.go:319]
I1221 19:47:02.477563 127170 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1221 19:47:02.477570 127170 kubeadm.go:319]
I1221 19:47:02.477660 127170 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token opiai1.qnvll8epf3ex3bpn \
I1221 19:47:02.477802 127170 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:4f35461b95b227e9d1829c929bb399222e80c78f00e691e8dfd0f482c558d3d6
I1221 19:47:02.477816 127170 cni.go:84] Creating CNI manager for ""
I1221 19:47:02.477825 127170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1221 19:47:02.479208 127170 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1221 19:47:02.480417 127170 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1221 19:47:02.496005 127170 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I1221 19:47:02.525861 127170 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1221 19:47:02.525975 127170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1221 19:47:02.525987 127170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-659513 minikube.k8s.io/updated_at=2025_12_21T19_47_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c minikube.k8s.io/name=addons-659513 minikube.k8s.io/primary=true
I1221 19:47:02.557089 127170 ops.go:34] apiserver oom_adj: -16
I1221 19:47:02.676740 127170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1221 19:47:03.176831 127170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1221 19:47:03.676850 127170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1221 19:47:04.177421 127170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1221 19:47:04.677008 127170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1221 19:47:05.177269 127170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1221 19:47:05.677211 127170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1221 19:47:06.177219 127170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1221 19:47:06.677812 127170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1221 19:47:07.177342 127170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1221 19:47:07.287157 127170 kubeadm.go:1114] duration metric: took 4.76123301s to wait for elevateKubeSystemPrivileges
I1221 19:47:07.287211 127170 kubeadm.go:403] duration metric: took 16.481024379s to StartCluster
I1221 19:47:07.287247 127170 settings.go:142] acquiring lock: {Name:mk8bc901164ee13eb5278832ae429ca9408ea551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1221 19:47:07.287390 127170 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22179-122429/kubeconfig
I1221 19:47:07.287772 127170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/kubeconfig: {Name:mke0d928f8059efde48d6d18bc9cf0e4672401c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1221 19:47:07.287989 127170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1221 19:47:07.288010 127170 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
I1221 19:47:07.288075 127170 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1221 19:47:07.288188 127170 addons.go:70] Setting default-storageclass=true in profile "addons-659513"
I1221 19:47:07.288208 127170 addons.go:70] Setting yakd=true in profile "addons-659513"
I1221 19:47:07.288225 127170 addons.go:239] Setting addon yakd=true in "addons-659513"
I1221 19:47:07.288223 127170 config.go:182] Loaded profile config "addons-659513": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:47:07.288233 127170 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-659513"
I1221 19:47:07.288212 127170 addons.go:70] Setting cloud-spanner=true in profile "addons-659513"
I1221 19:47:07.288256 127170 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-659513"
I1221 19:47:07.288270 127170 addons.go:70] Setting registry=true in profile "addons-659513"
I1221 19:47:07.288224 127170 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-659513"
I1221 19:47:07.288218 127170 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-659513"
I1221 19:47:07.288389 127170 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-659513"
I1221 19:47:07.288257 127170 addons.go:239] Setting addon cloud-spanner=true in "addons-659513"
I1221 19:47:07.288516 127170 host.go:66] Checking if "addons-659513" exists ...
I1221 19:47:07.288552 127170 host.go:66] Checking if "addons-659513" exists ...
I1221 19:47:07.288265 127170 addons.go:70] Setting storage-provisioner=true in profile "addons-659513"
I1221 19:47:07.288646 127170 addons.go:239] Setting addon storage-provisioner=true in "addons-659513"
I1221 19:47:07.288691 127170 host.go:66] Checking if "addons-659513" exists ...
I1221 19:47:07.288275 127170 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-659513"
I1221 19:47:07.288964 127170 host.go:66] Checking if "addons-659513" exists ...
I1221 19:47:07.288278 127170 host.go:66] Checking if "addons-659513" exists ...
I1221 19:47:07.288281 127170 addons.go:239] Setting addon registry=true in "addons-659513"
I1221 19:47:07.289470 127170 host.go:66] Checking if "addons-659513" exists ...
I1221 19:47:07.288281 127170 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-659513"
I1221 19:47:07.289546 127170 host.go:66] Checking if "addons-659513" exists ...
I1221 19:47:07.288282 127170 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-659513"
I1221 19:47:07.289970 127170 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-659513"
I1221 19:47:07.288282 127170 addons.go:70] Setting ingress-dns=true in profile "addons-659513"
I1221 19:47:07.290284 127170 addons.go:239] Setting addon ingress-dns=true in "addons-659513"
I1221 19:47:07.290321 127170 host.go:66] Checking if "addons-659513" exists ...
I1221 19:47:07.288286 127170 addons.go:70] Setting inspektor-gadget=true in profile "addons-659513"
I1221 19:47:07.290517 127170 addons.go:239] Setting addon inspektor-gadget=true in "addons-659513"
I1221 19:47:07.290556 127170 host.go:66] Checking if "addons-659513" exists ...
I1221 19:47:07.288287 127170 addons.go:70] Setting volcano=true in profile "addons-659513"
I1221 19:47:07.290743 127170 addons.go:239] Setting addon volcano=true in "addons-659513"
I1221 19:47:07.290779 127170 host.go:66] Checking if "addons-659513" exists ...
I1221 19:47:07.288285 127170 addons.go:70] Setting registry-creds=true in profile "addons-659513"
I1221 19:47:07.291387 127170 addons.go:239] Setting addon registry-creds=true in "addons-659513"
I1221 19:47:07.291424 127170 host.go:66] Checking if "addons-659513" exists ...
I1221 19:47:07.288287 127170 addons.go:70] Setting gcp-auth=true in profile "addons-659513"
I1221 19:47:07.291660 127170 mustload.go:66] Loading cluster: addons-659513
I1221 19:47:07.291869 127170 config.go:182] Loaded profile config "addons-659513": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:47:07.288290 127170 addons.go:70] Setting metrics-server=true in profile "addons-659513"
I1221 19:47:07.291913 127170 addons.go:239] Setting addon metrics-server=true in "addons-659513"
I1221 19:47:07.291952 127170 host.go:66] Checking if "addons-659513" exists ...
I1221 19:47:07.288291 127170 addons.go:70] Setting volumesnapshots=true in profile "addons-659513"
I1221 19:47:07.292277 127170 addons.go:239] Setting addon volumesnapshots=true in "addons-659513"
I1221 19:47:07.292307 127170 host.go:66] Checking if "addons-659513" exists ...
I1221 19:47:07.288293 127170 addons.go:70] Setting ingress=true in profile "addons-659513"
I1221 19:47:07.292544 127170 addons.go:239] Setting addon ingress=true in "addons-659513"
I1221 19:47:07.292582 127170 host.go:66] Checking if "addons-659513" exists ...
I1221 19:47:07.292976 127170 out.go:179] * Verifying Kubernetes components...
I1221 19:47:07.296731 127170 addons.go:239] Setting addon default-storageclass=true in "addons-659513"
I1221 19:47:07.296768 127170 host.go:66] Checking if "addons-659513" exists ...
I1221 19:47:07.296985 127170 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1221 19:47:07.296992 127170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1221 19:47:07.297059 127170 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.46
I1221 19:47:07.297072 127170 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1221 19:47:07.297060 127170 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1221 19:47:07.298310 127170 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.6
I1221 19:47:07.298367 127170 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1221 19:47:07.298713 127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1221 19:47:07.298409 127170 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1221 19:47:07.298945 127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1221 19:47:07.299060 127170 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1221 19:47:07.299093 127170 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1221 19:47:07.299409 127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1221 19:47:07.299126 127170 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-659513"
I1221 19:47:07.299166 127170 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
W1221 19:47:07.299296 127170 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1221 19:47:07.299511 127170 host.go:66] Checking if "addons-659513" exists ...
I1221 19:47:07.299723 127170 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1221 19:47:07.299756 127170 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
I1221 19:47:07.299777 127170 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1221 19:47:07.300167 127170 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1221 19:47:07.299843 127170 host.go:66] Checking if "addons-659513" exists ...
I1221 19:47:07.300542 127170 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1221 19:47:07.300579 127170 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1221 19:47:07.300976 127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1221 19:47:07.301420 127170 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1221 19:47:07.301438 127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1221 19:47:07.302341 127170 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1221 19:47:07.302392 127170 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1221 19:47:07.302413 127170 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1221 19:47:07.302385 127170 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1221 19:47:07.302452 127170 out.go:179] - Using image docker.io/registry:3.0.0
I1221 19:47:07.302548 127170 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1221 19:47:07.303522 127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1221 19:47:07.302847 127170 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1221 19:47:07.303590 127170 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1221 19:47:07.303631 127170 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1221 19:47:07.303674 127170 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1221 19:47:07.303690 127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1221 19:47:07.303675 127170 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1221 19:47:07.304338 127170 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1221 19:47:07.304358 127170 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1221 19:47:07.304369 127170 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1221 19:47:07.304382 127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1221 19:47:07.304455 127170 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1221 19:47:07.305402 127170 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
I1221 19:47:07.305443 127170 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1221 19:47:07.307133 127170 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1221 19:47:07.308280 127170 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1221 19:47:07.308293 127170 out.go:179] - Using image docker.io/busybox:stable
I1221 19:47:07.309140 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.309459 127170 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1221 19:47:07.309608 127170 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1221 19:47:07.309629 127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1221 19:47:07.309617 127170 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1221 19:47:07.309679 127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1221 19:47:07.310667 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.311178 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:47:07.311214 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.311269 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.311654 127170 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1221 19:47:07.312272 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:47:07.312311 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.312259 127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
I1221 19:47:07.312621 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.312641 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.312704 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.313133 127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
I1221 19:47:07.313247 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:47:07.313289 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.314070 127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
I1221 19:47:07.314188 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:47:07.314265 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:47:07.314291 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.314385 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.314399 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:47:07.314436 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.314597 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.314603 127170 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1221 19:47:07.314882 127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
I1221 19:47:07.315008 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.315095 127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
I1221 19:47:07.315106 127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
I1221 19:47:07.316005 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:47:07.316043 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.316194 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:47:07.316245 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.316353 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.316408 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.316696 127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
I1221 19:47:07.316902 127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
I1221 19:47:07.317366 127170 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1221 19:47:07.317366 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:47:07.317424 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.317436 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.317455 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.317557 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:47:07.317596 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.317697 127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
I1221 19:47:07.318015 127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
I1221 19:47:07.318334 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:47:07.318376 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.318532 127170 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1221 19:47:07.318547 127170 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1221 19:47:07.318595 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:47:07.318631 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.319032 127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
I1221 19:47:07.319044 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.319063 127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
I1221 19:47:07.319443 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.319705 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:47:07.319741 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.319908 127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
I1221 19:47:07.320097 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:47:07.320131 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.320321 127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
I1221 19:47:07.322034 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.322407 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:47:07.322442 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:07.322632 127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
W1221 19:47:07.451672 127170 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54306->192.168.39.164:22: read: connection reset by peer
I1221 19:47:07.451715 127170 retry.go:84] will retry after 200ms: ssh: handshake failed: read tcp 192.168.39.1:54306->192.168.39.164:22: read: connection reset by peer
W1221 19:47:07.461092 127170 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54314->192.168.39.164:22: read: connection reset by peer
W1221 19:47:07.748560 127170 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54344->192.168.39.164:22: read: connection reset by peer
I1221 19:47:08.029768 127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1221 19:47:08.215629 127170 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1221 19:47:08.215670 127170 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1221 19:47:08.247352 127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1221 19:47:08.304317 127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
I1221 19:47:08.362560 127170 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1221 19:47:08.362600 127170 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1221 19:47:08.400131 127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1221 19:47:08.411258 127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1221 19:47:08.416375 127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1221 19:47:08.422943 127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1221 19:47:08.433592 127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1221 19:47:08.445886 127170 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1221 19:47:08.445930 127170 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1221 19:47:08.458678 127170 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1221 19:47:08.458698 127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1221 19:47:08.466761 127170 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1221 19:47:08.466785 127170 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1221 19:47:08.552736 127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1221 19:47:08.778028 127170 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1221 19:47:08.778065 127170 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1221 19:47:08.842461 127170 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.554435042s)
I1221 19:47:08.842557 127170 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.545534321s)
I1221 19:47:08.842637 127170 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1221 19:47:08.842712 127170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1221 19:47:09.067180 127170 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1221 19:47:09.067219 127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1221 19:47:09.100900 127170 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1221 19:47:09.100946 127170 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1221 19:47:09.102927 127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1221 19:47:09.115095 127170 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1221 19:47:09.115133 127170 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1221 19:47:09.212190 127170 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1221 19:47:09.212226 127170 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1221 19:47:09.358832 127170 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1221 19:47:09.358871 127170 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1221 19:47:09.501752 127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1221 19:47:09.519361 127170 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1221 19:47:09.519400 127170 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1221 19:47:09.525481 127170 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1221 19:47:09.525515 127170 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1221 19:47:09.576025 127170 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1221 19:47:09.576105 127170 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1221 19:47:09.688923 127170 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1221 19:47:09.688957 127170 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1221 19:47:09.802085 127170 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1221 19:47:09.802121 127170 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1221 19:47:09.911163 127170 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1221 19:47:09.911189 127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1221 19:47:09.956364 127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1221 19:47:09.980942 127170 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1221 19:47:09.980975 127170 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1221 19:47:10.160269 127170 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1221 19:47:10.160299 127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1221 19:47:10.254546 127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1221 19:47:10.321600 127170 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1221 19:47:10.321632 127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1221 19:47:10.482774 127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1221 19:47:10.618173 127170 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1221 19:47:10.618212 127170 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1221 19:47:10.948074 127170 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1221 19:47:10.948104 127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1221 19:47:11.445251 127170 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1221 19:47:11.445280 127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1221 19:47:11.779673 127170 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1221 19:47:11.779708 127170 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1221 19:47:11.965025 127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1221 19:47:14.734067 127170 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1221 19:47:14.737199 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:14.737695 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:47:14.737724 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:14.737902 127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
I1221 19:47:15.176346 127170 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1221 19:47:15.398929 127170 addons.go:239] Setting addon gcp-auth=true in "addons-659513"
I1221 19:47:15.399017 127170 host.go:66] Checking if "addons-659513" exists ...
I1221 19:47:15.401135 127170 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1221 19:47:15.403726 127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:15.404170 127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
I1221 19:47:15.404208 127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
I1221 19:47:15.404432 127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
I1221 19:47:15.677540 127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.430146382s)
I1221 19:47:15.677682 127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.373319843s)
I1221 19:47:15.677731 127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.647924055s)
I1221 19:47:15.677748 127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.277585166s)
I1221 19:47:15.677763 127170 addons.go:495] Verifying addon ingress=true in "addons-659513"
I1221 19:47:15.677875 127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.26147828s)
I1221 19:47:15.677842 127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.266542427s)
I1221 19:47:15.677947 127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.254981457s)
I1221 19:47:15.677983 127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.244374424s)
I1221 19:47:15.678035 127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.125273778s)
I1221 19:47:15.678074 127170 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.835410534s)
I1221 19:47:15.678095 127170 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.83536152s)
I1221 19:47:15.678118 127170 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1221 19:47:15.678254 127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.575295358s)
I1221 19:47:15.678328 127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.176537181s)
I1221 19:47:15.678358 127170 addons.go:495] Verifying addon registry=true in "addons-659513"
I1221 19:47:15.678887 127170 node_ready.go:35] waiting up to 6m0s for node "addons-659513" to be "Ready" ...
I1221 19:47:15.678449 127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.72204242s)
I1221 19:47:15.678957 127170 addons.go:495] Verifying addon metrics-server=true in "addons-659513"
I1221 19:47:15.678527 127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.423910096s)
I1221 19:47:15.679768 127170 out.go:179] * Verifying ingress addon...
I1221 19:47:15.680690 127170 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-659513 service yakd-dashboard -n yakd-dashboard
I1221 19:47:15.680690 127170 out.go:179] * Verifying registry addon...
I1221 19:47:15.682005 127170 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1221 19:47:15.683208 127170 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1221 19:47:15.686907 127170 node_ready.go:49] node "addons-659513" is "Ready"
I1221 19:47:15.686933 127170 node_ready.go:38] duration metric: took 8.002783ms for node "addons-659513" to be "Ready" ...
I1221 19:47:15.686949 127170 api_server.go:52] waiting for apiserver process to appear ...
I1221 19:47:15.686988 127170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1221 19:47:15.716365 127170 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1221 19:47:15.716391 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:15.724344 127170 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1221 19:47:15.724366 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
W1221 19:47:15.762064 127170 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
I1221 19:47:16.195857 127170 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-659513" context rescaled to 1 replicas
I1221 19:47:16.265102 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:16.267651 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:16.718582 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:16.718655 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:16.719351 127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.236526092s)
W1221 19:47:16.719403 127170 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1221 19:47:16.719442 127170 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1221 19:47:17.064107 127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1221 19:47:17.192991 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:17.193168 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:17.703094 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:17.703449 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:17.761575 127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.796483925s)
I1221 19:47:17.761634 127170 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-659513"
I1221 19:47:17.761643 127170 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.360472517s)
I1221 19:47:17.761710 127170 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.074701489s)
I1221 19:47:17.761740 127170 api_server.go:72] duration metric: took 10.473697059s to wait for apiserver process to appear ...
I1221 19:47:17.761796 127170 api_server.go:88] waiting for apiserver healthz status ...
I1221 19:47:17.761824 127170 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
I1221 19:47:17.763165 127170 out.go:179] * Verifying csi-hostpath-driver addon...
I1221 19:47:17.763172 127170 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1221 19:47:17.764756 127170 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1221 19:47:17.765531 127170 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1221 19:47:17.766343 127170 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1221 19:47:17.766364 127170 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1221 19:47:17.770759 127170 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
ok
I1221 19:47:17.771821 127170 api_server.go:141] control plane version: v1.34.3
I1221 19:47:17.771843 127170 api_server.go:131] duration metric: took 10.040248ms to wait for apiserver health ...
I1221 19:47:17.771853 127170 system_pods.go:43] waiting for kube-system pods to appear ...
I1221 19:47:17.783077 127170 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1221 19:47:17.783110 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:17.803103 127170 system_pods.go:59] 20 kube-system pods found
I1221 19:47:17.803153 127170 system_pods.go:61] "amd-gpu-device-plugin-96g9f" [ae1a4e49-3725-4452-ade4-01b3af2dfe3f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1221 19:47:17.803167 127170 system_pods.go:61] "coredns-66bc5c9577-26xrr" [dc85b7d7-8740-43f9-82ce-8e57e4f1a4d1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1221 19:47:17.803178 127170 system_pods.go:61] "coredns-66bc5c9577-wmlm4" [8d0b39bf-67af-49f4-bad3-27f7b7667bfd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1221 19:47:17.803187 127170 system_pods.go:61] "csi-hostpath-attacher-0" [bd327965-2ca8-4ea6-a549-0280d8857276] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1221 19:47:17.803199 127170 system_pods.go:61] "csi-hostpath-resizer-0" [9e4fa2a5-9ead-47f3-976f-8a05bf1aefe8] Pending
I1221 19:47:17.803207 127170 system_pods.go:61] "csi-hostpathplugin-8pbdl" [9db03d51-8cde-4534-a9b4-d5e1468a87b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1221 19:47:17.803214 127170 system_pods.go:61] "etcd-addons-659513" [d6e79a60-b93a-4d72-9a6a-27a83696ac1f] Running
I1221 19:47:17.803224 127170 system_pods.go:61] "kube-apiserver-addons-659513" [2f7bb8d7-56ea-4e2d-be31-3abb043240f9] Running
I1221 19:47:17.803230 127170 system_pods.go:61] "kube-controller-manager-addons-659513" [f8a6a122-5dd0-433a-852b-1265788f9d30] Running
I1221 19:47:17.803238 127170 system_pods.go:61] "kube-ingress-dns-minikube" [4c506cde-8495-4847-95bb-99f92a15aeb1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1221 19:47:17.803244 127170 system_pods.go:61] "kube-proxy-fbvb9" [f81d5845-1ca3-4d59-b971-848c73663c2d] Running
I1221 19:47:17.803250 127170 system_pods.go:61] "kube-scheduler-addons-659513" [230ee0ee-e72a-4131-a7ff-5774926289ad] Running
I1221 19:47:17.803259 127170 system_pods.go:61] "metrics-server-85b7d694d7-v72tn" [68904163-d7f9-411e-9a48-c014af0cef06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1221 19:47:17.803267 127170 system_pods.go:61] "nvidia-device-plugin-daemonset-ql2hl" [76700fd6-090f-485b-97c5-07cea983a62e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1221 19:47:17.803275 127170 system_pods.go:61] "registry-6b586f9694-dvnl4" [56216ff6-db76-45d5-945d-2bf21a023ebf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1221 19:47:17.803283 127170 system_pods.go:61] "registry-creds-764b6fb674-xk9c7" [d8a47e94-fba3-4da4-9a39-6f7db289cd2f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1221 19:47:17.803304 127170 system_pods.go:61] "registry-proxy-kntxd" [1893f6cf-53cb-4c2d-acea-6739ff305373] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1221 19:47:17.803312 127170 system_pods.go:61] "snapshot-controller-7d9fbc56b8-k6z6g" [661ae2cd-24eb-42e0-bcb7-8eb9cda59e83] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1221 19:47:17.803320 127170 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rr67d" [0fe3f2c2-9357-4203-83e5-791658b87779] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1221 19:47:17.803328 127170 system_pods.go:61] "storage-provisioner" [97ccdeb0-0aa9-4509-9ca8-0d067721e67a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1221 19:47:17.803337 127170 system_pods.go:74] duration metric: took 31.476578ms to wait for pod list to return data ...
I1221 19:47:17.803348 127170 default_sa.go:34] waiting for default service account to be created ...
I1221 19:47:17.809983 127170 default_sa.go:45] found service account: "default"
I1221 19:47:17.810010 127170 default_sa.go:55] duration metric: took 6.654975ms for default service account to be created ...
I1221 19:47:17.810020 127170 system_pods.go:116] waiting for k8s-apps to be running ...
I1221 19:47:17.845997 127170 system_pods.go:86] 20 kube-system pods found
I1221 19:47:17.846035 127170 system_pods.go:89] "amd-gpu-device-plugin-96g9f" [ae1a4e49-3725-4452-ade4-01b3af2dfe3f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1221 19:47:17.846044 127170 system_pods.go:89] "coredns-66bc5c9577-26xrr" [dc85b7d7-8740-43f9-82ce-8e57e4f1a4d1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1221 19:47:17.846052 127170 system_pods.go:89] "coredns-66bc5c9577-wmlm4" [8d0b39bf-67af-49f4-bad3-27f7b7667bfd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1221 19:47:17.846057 127170 system_pods.go:89] "csi-hostpath-attacher-0" [bd327965-2ca8-4ea6-a549-0280d8857276] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1221 19:47:17.846062 127170 system_pods.go:89] "csi-hostpath-resizer-0" [9e4fa2a5-9ead-47f3-976f-8a05bf1aefe8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1221 19:47:17.846067 127170 system_pods.go:89] "csi-hostpathplugin-8pbdl" [9db03d51-8cde-4534-a9b4-d5e1468a87b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1221 19:47:17.846075 127170 system_pods.go:89] "etcd-addons-659513" [d6e79a60-b93a-4d72-9a6a-27a83696ac1f] Running
I1221 19:47:17.846079 127170 system_pods.go:89] "kube-apiserver-addons-659513" [2f7bb8d7-56ea-4e2d-be31-3abb043240f9] Running
I1221 19:47:17.846083 127170 system_pods.go:89] "kube-controller-manager-addons-659513" [f8a6a122-5dd0-433a-852b-1265788f9d30] Running
I1221 19:47:17.846091 127170 system_pods.go:89] "kube-ingress-dns-minikube" [4c506cde-8495-4847-95bb-99f92a15aeb1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1221 19:47:17.846095 127170 system_pods.go:89] "kube-proxy-fbvb9" [f81d5845-1ca3-4d59-b971-848c73663c2d] Running
I1221 19:47:17.846101 127170 system_pods.go:89] "kube-scheduler-addons-659513" [230ee0ee-e72a-4131-a7ff-5774926289ad] Running
I1221 19:47:17.846108 127170 system_pods.go:89] "metrics-server-85b7d694d7-v72tn" [68904163-d7f9-411e-9a48-c014af0cef06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1221 19:47:17.846117 127170 system_pods.go:89] "nvidia-device-plugin-daemonset-ql2hl" [76700fd6-090f-485b-97c5-07cea983a62e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1221 19:47:17.846126 127170 system_pods.go:89] "registry-6b586f9694-dvnl4" [56216ff6-db76-45d5-945d-2bf21a023ebf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1221 19:47:17.846133 127170 system_pods.go:89] "registry-creds-764b6fb674-xk9c7" [d8a47e94-fba3-4da4-9a39-6f7db289cd2f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1221 19:47:17.846138 127170 system_pods.go:89] "registry-proxy-kntxd" [1893f6cf-53cb-4c2d-acea-6739ff305373] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1221 19:47:17.846144 127170 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k6z6g" [661ae2cd-24eb-42e0-bcb7-8eb9cda59e83] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1221 19:47:17.846151 127170 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rr67d" [0fe3f2c2-9357-4203-83e5-791658b87779] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1221 19:47:17.846155 127170 system_pods.go:89] "storage-provisioner" [97ccdeb0-0aa9-4509-9ca8-0d067721e67a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1221 19:47:17.846163 127170 system_pods.go:126] duration metric: took 36.137486ms to wait for k8s-apps to be running ...
I1221 19:47:17.846172 127170 system_svc.go:44] waiting for kubelet service to be running ...
I1221 19:47:17.846226 127170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1221 19:47:17.938224 127170 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1221 19:47:17.938269 127170 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1221 19:47:18.021696 127170 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1221 19:47:18.021728 127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1221 19:47:18.095285 127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1221 19:47:18.192036 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:18.192631 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:18.271019 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:18.688359 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:18.691190 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:18.774465 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:19.095205 127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.031038885s)
I1221 19:47:19.095315 127170 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.249059717s)
I1221 19:47:19.095347 127170 system_svc.go:56] duration metric: took 1.249171811s WaitForService to wait for kubelet
I1221 19:47:19.095357 127170 kubeadm.go:587] duration metric: took 11.807314269s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1221 19:47:19.095376 127170 node_conditions.go:102] verifying NodePressure condition ...
I1221 19:47:19.107387 127170 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1221 19:47:19.107419 127170 node_conditions.go:123] node cpu capacity is 2
I1221 19:47:19.107434 127170 node_conditions.go:105] duration metric: took 12.052562ms to run NodePressure ...
I1221 19:47:19.107446 127170 start.go:242] waiting for startup goroutines ...
I1221 19:47:19.192610 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:19.206910 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:19.292979 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:19.325247 127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.229921101s)
I1221 19:47:19.326273 127170 addons.go:495] Verifying addon gcp-auth=true in "addons-659513"
I1221 19:47:19.328264 127170 out.go:179] * Verifying gcp-auth addon...
I1221 19:47:19.329891 127170 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1221 19:47:19.338528 127170 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1221 19:47:19.338546 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:19.688356 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:19.690019 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:19.772735 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:19.837383 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:20.190283 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:20.192016 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:20.272580 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:20.334811 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:20.686245 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:20.687961 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:20.773386 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:20.837847 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:21.198331 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:21.198934 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:21.273259 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:21.334079 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:21.688724 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:21.689418 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:21.775431 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:21.836591 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:22.186958 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:22.189471 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:22.269921 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:22.334798 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:22.688224 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:22.689079 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:22.770684 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:22.834632 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:23.193585 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:23.197271 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:23.273043 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:23.336042 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:23.686581 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:23.687581 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:23.770722 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:23.836325 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:24.189409 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:24.190627 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:24.290113 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:24.334472 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:24.687015 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:24.687087 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:24.769647 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:24.834122 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:25.192066 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:25.192207 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:25.295883 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:25.334330 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:25.686890 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:25.688989 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:25.770217 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:25.836003 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:26.190005 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:26.190157 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:26.291476 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:26.335912 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:26.686546 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:26.687207 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:26.770314 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:26.834969 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:27.188542 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:27.189269 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:27.271544 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:27.335133 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:27.686546 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:27.687516 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:27.772192 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:27.835907 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:28.374683 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:28.377142 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:28.377322 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:28.378399 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:28.688909 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:28.689832 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:28.769656 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:28.842153 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:29.189785 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:29.189803 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:29.271235 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:29.337888 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:29.685411 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:29.687830 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:29.769726 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:29.837390 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:30.258158 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:30.258386 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:30.358780 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:30.359915 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:30.686889 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:30.687030 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:30.769793 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:30.834254 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:31.190444 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:31.191408 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:31.269718 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:31.333811 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:31.686150 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:31.688479 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:31.772084 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:31.835402 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:32.243342 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:32.244721 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:32.272259 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:32.342405 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:32.690645 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:32.690684 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:32.770264 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:32.835389 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:33.192566 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:33.193048 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:33.271650 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:33.333391 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:33.688268 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:33.690248 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:33.770081 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:33.834932 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:34.187525 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:34.189108 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:34.270692 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:34.334913 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:34.688479 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:34.689216 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:34.770803 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:34.835369 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:35.189720 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:35.190559 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:35.274392 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:35.333328 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:35.686637 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:35.689754 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:35.769554 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:35.834269 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:36.186710 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:36.189388 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:36.412150 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:36.412734 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:36.689675 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:36.689931 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:36.771526 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:36.833911 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:37.189133 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:37.191957 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:37.270778 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:37.334850 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:37.686135 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:37.688330 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:37.771641 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:37.834288 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:38.187439 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:38.190974 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:38.272442 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:38.336330 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:39.000910 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:39.001368 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:39.001927 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:39.002535 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:39.187712 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:39.195436 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:39.289754 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:39.333514 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:39.686676 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:39.689523 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:39.770306 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:39.839372 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:40.187464 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:40.187477 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:40.270514 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:40.333253 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:40.690067 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:40.691052 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:40.771584 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:40.837943 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:41.190918 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:41.191156 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:41.270970 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:41.337019 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:41.685885 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:41.687630 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:41.770753 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:41.835313 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:42.188330 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:42.191033 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:42.273002 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:42.333854 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:42.688567 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:42.688761 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1221 19:47:42.788822 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:42.833917 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:43.189117 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:43.189624 127170 kapi.go:107] duration metric: took 27.506411084s to wait for kubernetes.io/minikube-addons=registry ...
I1221 19:47:43.290180 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:43.334182 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:43.686078 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:43.772026 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:43.835472 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:44.187252 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:44.269731 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:44.337914 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:44.686588 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:44.773877 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:44.833919 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:45.187140 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:45.288529 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:45.333392 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:45.690197 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:45.770774 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:45.838042 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:46.190113 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:46.272773 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:46.336573 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:46.689532 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:46.774677 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:46.835292 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:47.190705 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:47.273313 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:47.347816 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:47.689074 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:47.770870 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:47.836939 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:48.188168 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:48.384536 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:48.385798 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:48.691424 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:48.790134 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:48.833125 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:49.190262 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:49.278840 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:49.334843 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:49.686039 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:49.770075 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:49.834636 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:50.188987 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:50.270968 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:50.336650 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:50.958722 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:50.959184 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:50.960154 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:51.192612 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:51.270316 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:51.335687 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:51.686708 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:51.771074 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:51.838495 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:52.186632 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:52.270778 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:52.335533 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:52.688109 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:53.004230 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:53.006171 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:53.188555 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:53.289098 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:53.389578 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:53.692124 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:53.775630 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:53.837274 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:54.187691 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:54.269996 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:54.334628 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:54.694480 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:54.794383 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:54.837652 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:55.189248 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:55.274658 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:55.333942 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:55.724077 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:55.769877 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:55.841420 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:56.186283 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:56.272664 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:56.336112 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:56.686659 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:56.775727 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:56.873457 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:57.188555 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:57.271080 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:57.333854 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:57.686305 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:57.771338 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:57.834242 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:58.186119 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:58.271386 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:58.336441 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:58.689280 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:58.772570 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:58.840136 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:59.189050 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:59.290081 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:59.390912 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:47:59.687580 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:47:59.770539 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:47:59.839055 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:00.186543 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:00.273104 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:48:00.337400 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:00.691097 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:00.773335 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:48:00.836259 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:01.188795 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:01.288116 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:48:01.333123 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:01.691097 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:01.769875 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:48:01.872453 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:02.186691 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:02.272810 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:48:02.333817 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:02.686681 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:02.770062 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:48:02.832945 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:03.189798 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:03.290544 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:48:03.395959 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:03.688057 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:03.770571 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:48:03.834552 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:04.187411 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:04.274604 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:48:04.335689 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:04.687977 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:04.789367 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:48:04.832672 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:05.186591 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:05.270442 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:48:05.334678 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:05.686771 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:05.770298 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:48:05.838161 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:06.186108 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:06.271662 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:48:06.340546 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:06.688024 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:06.769472 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:48:06.833768 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:07.187208 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:07.288222 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:48:07.388431 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:07.688042 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:07.770791 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:48:07.835270 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:08.187695 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:08.270521 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:48:08.334572 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:08.687147 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:08.769775 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:48:08.834042 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:09.187275 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:09.269777 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:48:09.337691 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:09.686807 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:09.772431 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:48:09.835675 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:10.187163 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:10.269578 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:48:10.334787 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:10.686326 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:10.769518 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1221 19:48:10.833274 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:11.185764 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:11.269430 127170 kapi.go:107] duration metric: took 53.503902518s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1221 19:48:11.333395 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:11.685773 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:11.833941 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:12.185645 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:12.333626 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:12.686857 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:12.834531 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:13.186643 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:13.336080 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:13.686088 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:13.834038 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:14.185170 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:14.333908 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:14.686280 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:14.833136 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:15.186266 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:15.333992 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:15.686562 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:15.836624 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:16.186587 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:16.334050 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:16.685982 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:16.833869 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:17.186444 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:17.333886 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:17.686200 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:17.835068 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:18.185891 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:18.333826 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:18.686721 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:18.833803 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:19.185530 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:19.333799 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:19.686365 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:19.837464 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:20.186201 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:20.334673 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:20.687058 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:20.833782 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:21.187740 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:21.337252 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:21.688698 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:21.835436 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:22.187689 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:22.338852 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:22.690141 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:22.839285 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:23.187072 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:23.334304 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:23.687007 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:23.838261 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:24.188104 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:24.334403 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:24.689315 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:24.844638 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:25.186156 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:25.333504 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:25.688574 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:25.836883 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:26.187057 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:26.333888 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:26.686641 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:26.834347 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:27.337693 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:27.338239 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:27.688219 127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1221 19:48:27.838590 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:28.186425 127170 kapi.go:107] duration metric: took 1m12.504424762s to wait for app.kubernetes.io/name=ingress-nginx ...
I1221 19:48:28.333143 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:28.838595 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:29.336226 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:29.836393 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:30.333900 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:30.834884 127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1221 19:48:31.334121 127170 kapi.go:107] duration metric: took 1m12.004226789s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1221 19:48:31.336072 127170 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-659513 cluster.
I1221 19:48:31.337432 127170 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1221 19:48:31.338687 127170 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I1221 19:48:31.339974 127170 out.go:179] * Enabled addons: storage-provisioner, inspektor-gadget, cloud-spanner, ingress-dns, registry-creds, amd-gpu-device-plugin, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I1221 19:48:31.341084 127170 addons.go:530] duration metric: took 1m24.053006477s for enable addons: enabled=[storage-provisioner inspektor-gadget cloud-spanner ingress-dns registry-creds amd-gpu-device-plugin nvidia-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I1221 19:48:31.341129 127170 start.go:247] waiting for cluster config update ...
I1221 19:48:31.341159 127170 start.go:256] writing updated cluster config ...
I1221 19:48:31.341477 127170 ssh_runner.go:195] Run: rm -f paused
I1221 19:48:31.348768 127170 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1221 19:48:31.352595 127170 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-26xrr" in "kube-system" namespace to be "Ready" or be gone ...
I1221 19:48:31.359570 127170 pod_ready.go:94] pod "coredns-66bc5c9577-26xrr" is "Ready"
I1221 19:48:31.359594 127170 pod_ready.go:86] duration metric: took 6.977531ms for pod "coredns-66bc5c9577-26xrr" in "kube-system" namespace to be "Ready" or be gone ...
I1221 19:48:31.361757 127170 pod_ready.go:83] waiting for pod "etcd-addons-659513" in "kube-system" namespace to be "Ready" or be gone ...
I1221 19:48:31.366449 127170 pod_ready.go:94] pod "etcd-addons-659513" is "Ready"
I1221 19:48:31.366470 127170 pod_ready.go:86] duration metric: took 4.693992ms for pod "etcd-addons-659513" in "kube-system" namespace to be "Ready" or be gone ...
I1221 19:48:31.368572 127170 pod_ready.go:83] waiting for pod "kube-apiserver-addons-659513" in "kube-system" namespace to be "Ready" or be gone ...
I1221 19:48:31.376619 127170 pod_ready.go:94] pod "kube-apiserver-addons-659513" is "Ready"
I1221 19:48:31.376641 127170 pod_ready.go:86] duration metric: took 8.052067ms for pod "kube-apiserver-addons-659513" in "kube-system" namespace to be "Ready" or be gone ...
I1221 19:48:31.380627 127170 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-659513" in "kube-system" namespace to be "Ready" or be gone ...
I1221 19:48:31.753934 127170 pod_ready.go:94] pod "kube-controller-manager-addons-659513" is "Ready"
I1221 19:48:31.753965 127170 pod_ready.go:86] duration metric: took 373.316303ms for pod "kube-controller-manager-addons-659513" in "kube-system" namespace to be "Ready" or be gone ...
I1221 19:48:31.953852 127170 pod_ready.go:83] waiting for pod "kube-proxy-fbvb9" in "kube-system" namespace to be "Ready" or be gone ...
I1221 19:48:32.354648 127170 pod_ready.go:94] pod "kube-proxy-fbvb9" is "Ready"
I1221 19:48:32.354677 127170 pod_ready.go:86] duration metric: took 400.79518ms for pod "kube-proxy-fbvb9" in "kube-system" namespace to be "Ready" or be gone ...
I1221 19:48:32.553601 127170 pod_ready.go:83] waiting for pod "kube-scheduler-addons-659513" in "kube-system" namespace to be "Ready" or be gone ...
I1221 19:48:32.952951 127170 pod_ready.go:94] pod "kube-scheduler-addons-659513" is "Ready"
I1221 19:48:32.952984 127170 pod_ready.go:86] duration metric: took 399.351812ms for pod "kube-scheduler-addons-659513" in "kube-system" namespace to be "Ready" or be gone ...
I1221 19:48:32.952997 127170 pod_ready.go:40] duration metric: took 1.604197504s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1221 19:48:32.999372 127170 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
I1221 19:48:33.001297 127170 out.go:179] * Done! kubectl is now configured to use "addons-659513" cluster and "default" namespace by default
==> CRI-O <==
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.855720652Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1581b601-ba24-4be7-91ca-0fea0420369c name=/runtime.v1.RuntimeService/Version
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.858523327Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4cae04c3-5a13-4241-ae10-84cab9d56057 name=/runtime.v1.ImageService/ImageFsInfo
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.861491881Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766346691861467321,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:551108,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4cae04c3-5a13-4241-ae10-84cab9d56057 name=/runtime.v1.ImageService/ImageFsInfo
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.863414415Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b83c8337-67f7-4552-bacd-bc6732abb512 name=/runtime.v1.RuntimeService/ListContainers
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.863487785Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b83c8337-67f7-4552-bacd-bc6732abb512 name=/runtime.v1.RuntimeService/ListContainers
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.863753159Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24cbfb986d5c3ca3914fc9c982bc0f327cab22d1ddd350a5101dea571b531ae4,PodSandboxId:4a41ca16c86e05833fea9885e77582ab9e4210b58533c294dad1e98eb8c23e08,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766346551717301367,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33f3ec72-704c-4201-8ff2-47eac4b359fe,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae51e8a03b5795078d92e305dc7f1e5145cfd39ed842e2f6cd4495696e266f2,PodSandboxId:7888b298e1b5aa605057f03f82f11340b72fe1725c7675291c5bfa317e408079,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766346516355499449,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7f347285-8b81-4c24-9b59-da519e7b35b0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2ff9b28479efc2d0a0fc471665c02560315c9bd5ab4199b166f0948ed20421,PodSandboxId:d9ecfb70d267b99db7ebc525d5264fa9a7bbcbe02ed1a5b3440a3b1dbc5681cd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1766346507489445293,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-s7ffl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da33936d-a439-40f8-8c05-f7eb37c2a965,},Annotations:map[string]string{io.kubernetes.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b0d4f54935266cbc9e1f226622aaebfc1148b657bd58b134d2342d3e25a3f81c,PodSandboxId:826c23ce1e03655df44ee44d175bf9e26249c1c8bf7a6a2728bdff10b60fb9d0,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1766346475930855575,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xlmpc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c224b53b-30a7-455e-a46f-71e29fefeebd,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66cb039aacf2b693177d81815eac065ba13bfc5a764199e4c8195fd6b73e4e2e,PodSandboxId:d660c400d7e267153678575b2516776993a2d1b947acd15fe08199d37be2a12a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1766346474567680497,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-5skzk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9804e45-e681-4a75-95bc-7d01cadcb23a,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2554c908ec62494980fa93068273fa8ea1b82eed4e4bd6c217c4322a493b009,PodSandboxId:b74c477f14c08fd8448306faeee5b5dafbc9d239ed6d56eac75a37a62b92ecf3,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1766346459113621590,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c506cde-8495-4847-95bb-99f92a15aeb1,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71a3f6b64ed7664ee86b4e70656a368783bc04438140179b672d70912ad173a,PodSandboxId:f10da7fce20989d8abbe13fca81725b8d162ae31e6c482b520a95f2d5a934b20,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1766346444128697039,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-96g9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae1a4e49-3725-4452-ade4-01b3af2dfe3f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:821adec83773446bd435ef05ab329e5d395b6617013fdb8fb83cfe0e620f4c54,PodSandboxId:92681fb3c28b7795a449b0f25cdffe478d433a56caeacac88293fbea4b4a9ee1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766346439119695360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ccdeb0-0aa9-4509-9ca8-0d067721e67a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaf270f354b50d2a160ee904351fae497bcb13ccd6a6225ad9d4d85ddc5a653f,PodSandboxId:cd23883be15328c292099c0fdf315f96f42d7be535f9563b51624c899365501a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766346428931047668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-26xrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85b7d7-8740-43f9-82ce-8e57e4f1a4d1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:944524acd2e98b5a8fbda9f53aa5af06093335f472b9c4739bf44311faf57c5f,PodSandboxId:377a6c7a47a554763ff617d6a99ad5dcf3bd6b6c7f46db37c6f1a3d8354c0436,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766346428043980106,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbvb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f81d5845-1ca3-4d59-b971-848c73663c2d,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:835c8c15bbf37d26aca711aed08532cb1b32be70b119565fe2f14cdba5136552,PodSandboxId:f3478431553a6f4f25bb09dff168fda89445871726fd8e3c5b24da2d1e74bb58,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766346416335911432,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8a4cda82f44637052e031d96df1f39,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5546673aec525016ac3db18f88a4fc01cedc9678c9eb422c032127aa209ca951,PodSandboxId:7a9712131b66b91e098794ea27db2bbd0ec954d8db079739381abe579aee2de2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766346416321568471,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e00703bc6d857e7a94b8aa3578cd0ba,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e3f1b192dcb7acee686c577bf7a411a3d775b35627c76e70a7d5588ed42e81,PodSandboxId:dd2945bb4b694fb25f86847766b25f3c7a558ea7a9d2f93d575225c673771b39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766346416307836253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5f17957e871bfb19e971bde6d59acab,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70cbc562e70d050f91338c415852cd26b7e7f1fdea65d9883e7b97d79508e7a6,PodSandboxId:3a3934ff8e8846c95df0f460c16f082fc042910df69600567798ae6faea3e246,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766346416296051360,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a83fa93dc395b9c19eae8f42e5ac0af,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b83c8337-67f7-4552-bacd-bc6732abb512 name=/runtime.v1.RuntimeService/ListContainers
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.923573046Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c88df894-2ecb-4b9b-ab5d-50898cb7c652 name=/runtime.v1.RuntimeService/Version
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.923661187Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c88df894-2ecb-4b9b-ab5d-50898cb7c652 name=/runtime.v1.RuntimeService/Version
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.926485682Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f34eea82-0e3e-4e7c-a5ee-12384295183e name=/runtime.v1.ImageService/ImageFsInfo
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.927875096Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766346691927845160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:551108,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f34eea82-0e3e-4e7c-a5ee-12384295183e name=/runtime.v1.ImageService/ImageFsInfo
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.930318133Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b0a6f16-db72-47fa-ae44-d6d0cf8a31bb name=/runtime.v1.RuntimeService/ListContainers
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.930381178Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b0a6f16-db72-47fa-ae44-d6d0cf8a31bb name=/runtime.v1.RuntimeService/ListContainers
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.930656073Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24cbfb986d5c3ca3914fc9c982bc0f327cab22d1ddd350a5101dea571b531ae4,PodSandboxId:4a41ca16c86e05833fea9885e77582ab9e4210b58533c294dad1e98eb8c23e08,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766346551717301367,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33f3ec72-704c-4201-8ff2-47eac4b359fe,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae51e8a03b5795078d92e305dc7f1e5145cfd39ed842e2f6cd4495696e266f2,PodSandboxId:7888b298e1b5aa605057f03f82f11340b72fe1725c7675291c5bfa317e408079,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766346516355499449,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7f347285-8b81-4c24-9b59-da519e7b35b0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2ff9b28479efc2d0a0fc471665c02560315c9bd5ab4199b166f0948ed20421,PodSandboxId:d9ecfb70d267b99db7ebc525d5264fa9a7bbcbe02ed1a5b3440a3b1dbc5681cd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1766346507489445293,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-s7ffl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da33936d-a439-40f8-8c05-f7eb37c2a965,},Annotations:map[string]string{io.kubernetes.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b0d4f54935266cbc9e1f226622aaebfc1148b657bd58b134d2342d3e25a3f81c,PodSandboxId:826c23ce1e03655df44ee44d175bf9e26249c1c8bf7a6a2728bdff10b60fb9d0,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1766346475930855575,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xlmpc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c224b53b-30a7-455e-a46f-71e29fefeebd,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66cb039aacf2b693177d81815eac065ba13bfc5a764199e4c8195fd6b73e4e2e,PodSandboxId:d660c400d7e267153678575b2516776993a2d1b947acd15fe08199d37be2a12a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1766346474567680497,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-5skzk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9804e45-e681-4a75-95bc-7d01cadcb23a,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2554c908ec62494980fa93068273fa8ea1b82eed4e4bd6c217c4322a493b009,PodSandboxId:b74c477f14c08fd8448306faeee5b5dafbc9d239ed6d56eac75a37a62b92ecf3,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1766346459113621590,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c506cde-8495-4847-95bb-99f92a15aeb1,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71a3f6b64ed7664ee86b4e70656a368783bc04438140179b672d70912ad173a,PodSandboxId:f10da7fce20989d8abbe13fca81725b8d162ae31e6c482b520a95f2d5a934b20,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1766346444128697039,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-96g9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae1a4e49-3725-4452-ade4-01b3af2dfe3f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:821adec83773446bd435ef05ab329e5d395b6617013fdb8fb83cfe0e620f4c54,PodSandboxId:92681fb3c28b7795a449b0f25cdffe478d433a56caeacac88293fbea4b4a9ee1,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766346439119695360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ccdeb0-0aa9-4509-9ca8-0d067721e67a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaf270f354b50d2a160ee904351fae497bcb13ccd6a6225ad9d4d85ddc5a653f,PodSandboxId:cd23883be15328c292099c0fdf315f96f42d7be535f9563b51624c899365501a,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766346428931047668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-26xrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85b7d7-8740-43f9-82ce-8e57e4f1a4d1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:944524acd2e98b5a8fbda9f53aa5af06093335f472b9c4739bf44311faf57c5f,PodSandboxId:377a6c7a47a554763ff617d6a99ad5dcf3bd6b6c7f46db37c6f1a3d8354c0436,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766346428043980106,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbvb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f81d5845-1ca3-4d59-b971-848c73663c2d,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:835c8c15bbf37d26aca711aed08532cb1b32be70b119565fe2f14cdba5136552,PodSandboxId:f3478431553a6f4f25bb09dff168fda89445871726fd8e3c5b24da2d1e74bb58,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766346416335911432,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8a4cda82f44637052e031d96df1f39,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5546673aec525016ac3db18f88a4fc01cedc9678c9eb422c032127aa209ca951,PodSandboxId:7a9712131b66b91e098794ea27db2bbd0ec954d8db079739381abe579aee2de2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766346416321568471,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e00703bc6d857e7a94b8aa3578cd0ba,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container
.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e3f1b192dcb7acee686c577bf7a411a3d775b35627c76e70a7d5588ed42e81,PodSandboxId:dd2945bb4b694fb25f86847766b25f3c7a558ea7a9d2f93d575225c673771b39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766346416307836253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5f17957e871bfb19e971
bde6d59acab,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70cbc562e70d050f91338c415852cd26b7e7f1fdea65d9883e7b97d79508e7a6,PodSandboxId:3a3934ff8e8846c95df0f460c16f082fc042910df69600567798ae6faea3e246,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766346416296051360,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a83fa93dc395b9c19eae8f42e5ac0af,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b0a6f16-db72-47fa-ae44-d6d0cf8a31bb name=/runtime.v1.RuntimeService/ListContainers
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.932540215Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=c42d2696-2092-4a2f-88f9-c7976058c31e name=/runtime.v1.RuntimeService/ListPodSandbox
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.933517634Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b8ca4d2cd5b7e65811f69a161ce13c537a5e4ca4e7948d0356f617f4146f81d8,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-qfn7w,Uid:1432962d-567f-41c9-8e1a-86dc0ebcb6c5,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766346691014061737,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-qfn7w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1432962d-567f-41c9-8e1a-86dc0ebcb6c5,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-21T19:51:30.697563944Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4a41ca16c86e05833fea9885e77582ab9e4210b58533c294dad1e98eb8c23e08,Metadata:&PodSandboxMetadata{Name:nginx,Uid:33f3ec72-704c-4201-8ff2-47eac4b359fe,Namespace:default,Attempt:0,},St
ate:SANDBOX_READY,CreatedAt:1766346549215672076,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33f3ec72-704c-4201-8ff2-47eac4b359fe,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-21T19:49:08.898465472Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7888b298e1b5aa605057f03f82f11340b72fe1725c7675291c5bfa317e408079,Metadata:&PodSandboxMetadata{Name:busybox,Uid:7f347285-8b81-4c24-9b59-da519e7b35b0,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766346513928437485,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7f347285-8b81-4c24-9b59-da519e7b35b0,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-21T19:48:33.605837987Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d9ecfb70d267b99db7ebc
525d5264fa9a7bbcbe02ed1a5b3440a3b1dbc5681cd,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-85d4c799dd-s7ffl,Uid:da33936d-a439-40f8-8c05-f7eb37c2a965,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766346500318989097,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-s7ffl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da33936d-a439-40f8-8c05-f7eb37c2a965,pod-template-hash: 85d4c799dd,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-21T19:47:15.488185624Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:826c23ce1e03655df44ee44d175bf9e26249c1c8bf7a6a2728bdff10b60fb9d0,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-xlmpc,Uid:c224b53b-30a7-455e-a46f-71e29fefeebd,Namespace:ingress-nginx,Attempt:0,},St
ate:SANDBOX_NOTREADY,CreatedAt:1766346436937730430,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: a01aa6f7-e966-4492-ac72-e5e3ceabae8a,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: a01aa6f7-e966-4492-ac72-e5e3ceabae8a,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-xlmpc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c224b53b-30a7-455e-a46f-71e29fefeebd,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-21T19:47:15.571700364Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d660c400d7e267153678575b2516776993a2d1b947acd15fe08199d37be2a12a,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-5skzk,Uid:c9804e45-e681-4a75-95bc-7d01cadcb23a,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,Crea
tedAt:1766346436872108949,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 8986cafc-6e33-4101-a110-6119660391f7,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: 8986cafc-6e33-4101-a110-6119660391f7,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-5skzk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9804e45-e681-4a75-95bc-7d01cadcb23a,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-21T19:47:15.553493955Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:92681fb3c28b7795a449b0f25cdffe478d433a56caeacac88293fbea4b4a9ee1,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:97ccdeb0-0aa9-4509-9ca8-0d067721e67a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766346435865654862,Labels:map[string]
string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ccdeb0-0aa9-4509-9ca8-0d067721e67a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io
/config.seen: 2025-12-21T19:47:13.538961128Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b74c477f14c08fd8448306faeee5b5dafbc9d239ed6d56eac75a37a62b92ecf3,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:4c506cde-8495-4847-95bb-99f92a15aeb1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766346433760756713,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c506cde-8495-4847-95bb-99f92a15aeb1,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":
\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"hostPort\":53,\"protocol\":\"UDP\"}],\"volumeMounts\":[{\"mountPath\":\"/config\",\"name\":\"minikube-ingress-dns-config-volume\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\",\"volumes\":[{\"configMap\":{\"name\":\"minikube-ingress-dns\"},\"name\":\"minikube-ingress-dns-config-volume\"}]}}\n,kubernetes.io/config.seen: 2025-12-21T19:47:13.385535624Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f10da7fce20989d8abbe13fca81725b8d162ae31e6c482b520a95f2d5a934b20,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-96g9f,Uid:ae1a4e49-3725-4452-ade4-01b3af2dfe3f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:176634643116959
3450,Labels:map[string]string{controller-revision-hash: 7f87d6fd8d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-96g9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae1a4e49-3725-4452-ade4-01b3af2dfe3f,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-21T19:47:10.815377584Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cd23883be15328c292099c0fdf315f96f42d7be535f9563b51624c899365501a,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-26xrr,Uid:dc85b7d7-8740-43f9-82ce-8e57e4f1a4d1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766346427937419586,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-26xrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85b7d7-8740-43f9-82ce-8e57e4f1a4d1,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[st
ring]string{kubernetes.io/config.seen: 2025-12-21T19:47:07.553463424Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:377a6c7a47a554763ff617d6a99ad5dcf3bd6b6c7f46db37c6f1a3d8354c0436,Metadata:&PodSandboxMetadata{Name:kube-proxy-fbvb9,Uid:f81d5845-1ca3-4d59-b971-848c73663c2d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766346427460627319,Labels:map[string]string{controller-revision-hash: 55c7cb7b75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fbvb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f81d5845-1ca3-4d59-b971-848c73663c2d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-21T19:47:07.107513094Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7a9712131b66b91e098794ea27db2bbd0ec954d8db079739381abe579aee2de2,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-659513,Uid:3e00703bc6d857e7a94b8aa3578cd0ba,Namespace:kube-system,Attemp
t:0,},State:SANDBOX_READY,CreatedAt:1766346416088659150,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e00703bc6d857e7a94b8aa3578cd0ba,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3e00703bc6d857e7a94b8aa3578cd0ba,kubernetes.io/config.seen: 2025-12-21T19:46:55.473412251Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3a3934ff8e8846c95df0f460c16f082fc042910df69600567798ae6faea3e246,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-659513,Uid:6a83fa93dc395b9c19eae8f42e5ac0af,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766346416086460714,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a83fa93dc395b9c19eae8f42
e5ac0af,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6a83fa93dc395b9c19eae8f42e5ac0af,kubernetes.io/config.seen: 2025-12-21T19:46:55.473413132Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dd2945bb4b694fb25f86847766b25f3c7a558ea7a9d2f93d575225c673771b39,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-659513,Uid:f5f17957e871bfb19e971bde6d59acab,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766346416073951811,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5f17957e871bfb19e971bde6d59acab,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.164:8443,kubernetes.io/config.hash: f5f17957e871bfb19e971bde6d59acab,kubernetes.io/config.seen: 2025-12-21T19:46:55.473410502Z,kubernetes.io/config.source: file,},Ru
ntimeHandler:,},&PodSandbox{Id:f3478431553a6f4f25bb09dff168fda89445871726fd8e3c5b24da2d1e74bb58,Metadata:&PodSandboxMetadata{Name:etcd-addons-659513,Uid:9f8a4cda82f44637052e031d96df1f39,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766346416072379807,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8a4cda82f44637052e031d96df1f39,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.164:2379,kubernetes.io/config.hash: 9f8a4cda82f44637052e031d96df1f39,kubernetes.io/config.seen: 2025-12-21T19:46:55.473406852Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c42d2696-2092-4a2f-88f9-c7976058c31e name=/runtime.v1.RuntimeService/ListPodSandbox
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.934869518Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b75e2cd0-c6c7-4b47-9c7e-656ae6168a30 name=/runtime.v1.RuntimeService/ListContainers
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.935187785Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b75e2cd0-c6c7-4b47-9c7e-656ae6168a30 name=/runtime.v1.RuntimeService/ListContainers
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.935696570Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24cbfb986d5c3ca3914fc9c982bc0f327cab22d1ddd350a5101dea571b531ae4,PodSandboxId:4a41ca16c86e05833fea9885e77582ab9e4210b58533c294dad1e98eb8c23e08,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766346551717301367,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33f3ec72-704c-4201-8ff2-47eac4b359fe,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae51e8a03b5795078d92e305dc7f1e5145cfd39ed842e2f6cd4495696e266f2,PodSandboxId:7888b298e1b5aa605057f03f82f11340b72fe1725c7675291c5bfa317e408079,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766346516355499449,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7f347285-8b81-4c24-9b59-da519e7b35b0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2ff9b28479efc2d0a0fc471665c02560315c9bd5ab4199b166f0948ed20421,PodSandboxId:d9ecfb70d267b99db7ebc525d5264fa9a7bbcbe02ed1a5b3440a3b1dbc5681cd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1766346507489445293,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-s7ffl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da33936d-a439-40f8-8c05-f7eb37c2a965,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b0d4f54935266cbc9e1f226622aaebfc1148b657bd58b134d2342d3e25a3f81c,PodSandboxId:826c23ce1e03655df44ee44d175bf9e26249c1c8bf7a6a2728bdff10b60fb9d0,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1766346475930855575,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xlmpc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c224b53b-30a7-455e-a46f-71e29fefeebd,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66cb039aacf2b693177d81815eac065ba13bfc5a764199e4c8195fd6b73e4e2e,PodSandboxId:d660c400d7e267153678575b2516776993a2d1b947acd15fe08199d37be2a12a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1766346474567680497,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-5skzk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9804e45-e681-4a75-95bc-7d01cadcb23a,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2554c908ec62494980fa93068273fa8ea1b82eed4e4bd6c217c4322a493b009,PodSandboxId:b74c477f14c08fd8448306faeee5b5dafbc9d239ed6d56eac75a37a62b92ecf3,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1766346459113621590,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c506cde-8495-4847-95bb-99f92a15aeb1,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71a3f6b64ed7664ee86b4e70656a368783bc04438140179b672d70912ad173a,PodSandboxId:f10da7fce20989d8abbe13fca81725b8d162ae31e6c482b520a95f2d5a934b20,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1766346444128697039,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-96g9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae1a4e49-3725-4452-ade4-01b3af2dfe3f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:821adec83773446bd435ef05ab329e5d395b6617013fdb8fb83cfe0e620f4c54,PodSandboxId:92681fb3c28b7795a449b0f25cdffe478d433a56caeacac88293fbea4b4a9ee1,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766346439119695360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ccdeb0-0aa9-4509-9ca8-0d067721e67a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaf270f354b50d2a160ee904351fae497bcb13ccd6a6225ad9d4d85ddc5a653f,PodSandboxId:cd23883be15328c292099c0fdf315f96f42d7be535f9563b51624c899365501a,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766346428931047668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-26xrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85b7d7-8740-43f9-82ce-8e57e4f1a4d1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:944524acd2e98b5a8fbda9f53aa5af06093335f472b9c4739bf44311faf57c5f,PodSandboxId:377a6c7a47a554763ff617d6a99ad5dcf3bd6b6c7f46db37c6f1a3d8354c0436,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766346428043980106,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbvb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f81d5845-1ca3-4d59-b971-848c73663c2d,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:835c8c15bbf37d26aca711aed08532cb1b32be70b119565fe2f14cdba5136552,PodSandboxId:f3478431553a6f4f25bb09dff168fda89445871726fd8e3c5b24da2d1e74bb58,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766346416335911432,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8a4cda82f44637052e031d96df1f39,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5546673aec525016ac3db18f88a4fc01cedc9678c9eb422c032127aa209ca951,PodSandboxId:7a9712131b66b91e098794ea27db2bbd0ec954d8db079739381abe579aee2de2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766346416321568471,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e00703bc6d857e7a94b8aa3578cd0ba,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container
.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e3f1b192dcb7acee686c577bf7a411a3d775b35627c76e70a7d5588ed42e81,PodSandboxId:dd2945bb4b694fb25f86847766b25f3c7a558ea7a9d2f93d575225c673771b39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766346416307836253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5f17957e871bfb19e971
bde6d59acab,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70cbc562e70d050f91338c415852cd26b7e7f1fdea65d9883e7b97d79508e7a6,PodSandboxId:3a3934ff8e8846c95df0f460c16f082fc042910df69600567798ae6faea3e246,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766346416296051360,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a83fa93dc395b9c19eae8f42e5ac0af,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b75e2cd0-c6c7-4b47-9c7e-656ae6168a30 name=/runtime.v1.RuntimeService/ListContainers
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.937457009Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 1432962d-567f-41c9-8e1a-86dc0ebcb6c5,},},}" file="otel-collector/interceptors.go:62" id=e4addf13-e601-42af-b23c-299b432082b7 name=/runtime.v1.RuntimeService/ListPodSandbox
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.937556367Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b8ca4d2cd5b7e65811f69a161ce13c537a5e4ca4e7948d0356f617f4146f81d8,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-qfn7w,Uid:1432962d-567f-41c9-8e1a-86dc0ebcb6c5,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766346691014061737,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-qfn7w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1432962d-567f-41c9-8e1a-86dc0ebcb6c5,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-21T19:51:30.697563944Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e4addf13-e601-42af-b23c-299b432082b7 name=/runtime.v1.RuntimeService/ListPodSandbox
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.938738911Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:b8ca4d2cd5b7e65811f69a161ce13c537a5e4ca4e7948d0356f617f4146f81d8,Verbose:false,}" file="otel-collector/interceptors.go:62" id=56438b81-da3e-48fc-a1db-a43529a4c9d6 name=/runtime.v1.RuntimeService/PodSandboxStatus
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.939781808Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:b8ca4d2cd5b7e65811f69a161ce13c537a5e4ca4e7948d0356f617f4146f81d8,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-qfn7w,Uid:1432962d-567f-41c9-8e1a-86dc0ebcb6c5,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766346691014061737,Network:&PodSandboxNetworkStatus{Ip:10.244.0.33,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:&UserNamespace{Mode:NODE,Uids:[]*IDMapping{},Gids:[]*IDMapping{},},},},},Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-qfn7w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1432962d-567f-41c9-8e1a-86dc0ebcb6c5,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen:
2025-12-21T19:51:30.697563944Z,kubernetes.io/config.source: api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=56438b81-da3e-48fc-a1db-a43529a4c9d6 name=/runtime.v1.RuntimeService/PodSandboxStatus
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.940374246Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 1432962d-567f-41c9-8e1a-86dc0ebcb6c5,},},}" file="otel-collector/interceptors.go:62" id=6aafdb4c-36c7-4b0a-b2ad-cdcba25b507d name=/runtime.v1.RuntimeService/ListContainers
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.940660192Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6aafdb4c-36c7-4b0a-b2ad-cdcba25b507d name=/runtime.v1.RuntimeService/ListContainers
Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.941885050Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6aafdb4c-36c7-4b0a-b2ad-cdcba25b507d name=/runtime.v1.RuntimeService/ListContainers
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
24cbfb986d5c3 public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c 2 minutes ago Running nginx 0 4a41ca16c86e0 nginx default
8ae51e8a03b57 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 2 minutes ago Running busybox 0 7888b298e1b5a busybox default
1d2ff9b28479e registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad 3 minutes ago Running controller 0 d9ecfb70d267b ingress-nginx-controller-85d4c799dd-s7ffl ingress-nginx
b0d4f54935266 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285 3 minutes ago Exited patch 0 826c23ce1e036 ingress-nginx-admission-patch-xlmpc ingress-nginx
66cb039aacf2b registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285 3 minutes ago Exited create 0 d660c400d7e26 ingress-nginx-admission-create-5skzk ingress-nginx
d2554c908ec62 docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 3 minutes ago Running minikube-ingress-dns 0 b74c477f14c08 kube-ingress-dns-minikube kube-system
d71a3f6b64ed7 docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 4 minutes ago Running amd-gpu-device-plugin 0 f10da7fce2098 amd-gpu-device-plugin-96g9f kube-system
821adec837734 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 92681fb3c28b7 storage-provisioner kube-system
aaf270f354b50 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969 4 minutes ago Running coredns 0 cd23883be1532 coredns-66bc5c9577-26xrr kube-system
944524acd2e98 36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691 4 minutes ago Running kube-proxy 0 377a6c7a47a55 kube-proxy-fbvb9 kube-system
835c8c15bbf37 a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1 4 minutes ago Running etcd 0 f3478431553a6 etcd-addons-659513 kube-system
5546673aec525 5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942 4 minutes ago Running kube-controller-manager 0 7a9712131b66b kube-controller-manager-addons-659513 kube-system
51e3f1b192dcb aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c 4 minutes ago Running kube-apiserver 0 dd2945bb4b694 kube-apiserver-addons-659513 kube-system
70cbc562e70d0 aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78 4 minutes ago Running kube-scheduler 0 3a3934ff8e884 kube-scheduler-addons-659513 kube-system
==> coredns [aaf270f354b50d2a160ee904351fae497bcb13ccd6a6225ad9d4d85ddc5a653f] <==
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
[INFO] Reloading complete
[INFO] 127.0.0.1:57736 - 5910 "HINFO IN 8626009017774707841.5089829701906143058. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026395123s
[INFO] 10.244.0.23:51803 - 45161 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000652122s
[INFO] 10.244.0.23:50720 - 43576 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000652323s
[INFO] 10.244.0.23:44305 - 16590 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000188968s
[INFO] 10.244.0.23:55430 - 11072 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000119974s
[INFO] 10.244.0.23:36131 - 61271 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000099275s
[INFO] 10.244.0.23:43585 - 57886 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125965s
[INFO] 10.244.0.23:39338 - 26550 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000980255s
[INFO] 10.244.0.23:48366 - 54851 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.006955981s
[INFO] 10.244.0.26:60660 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000342492s
[INFO] 10.244.0.26:49797 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000301813s
==> describe nodes <==
Name: addons-659513
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-659513
kubernetes.io/os=linux
minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
minikube.k8s.io/name=addons-659513
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_21T19_47_02_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-659513
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 21 Dec 2025 19:46:59 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-659513
AcquireTime: <unset>
RenewTime: Sun, 21 Dec 2025 19:51:27 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sun, 21 Dec 2025 19:49:35 +0000 Sun, 21 Dec 2025 19:46:57 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 21 Dec 2025 19:49:35 +0000 Sun, 21 Dec 2025 19:46:57 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 21 Dec 2025 19:49:35 +0000 Sun, 21 Dec 2025 19:46:57 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sun, 21 Dec 2025 19:49:35 +0000 Sun, 21 Dec 2025 19:47:02 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.164
Hostname: addons-659513
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
System Info:
Machine ID: 536fbf6298e14d4fbd81908693d32210
System UUID: 536fbf62-98e1-4d4f-bd81-908693d32210
Boot ID: 06755251-81a9-43ca-b220-4e3471a1e4b0
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.34.3
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (13 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m59s
default hello-world-app-5d498dc89-qfn7w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m24s
ingress-nginx ingress-nginx-controller-85d4c799dd-s7ffl 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m17s
kube-system amd-gpu-device-plugin-96g9f 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m22s
kube-system coredns-66bc5c9577-26xrr 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4m25s
kube-system etcd-addons-659513 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 4m31s
kube-system kube-apiserver-addons-659513 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m31s
kube-system kube-controller-manager-addons-659513 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m31s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m19s
kube-system kube-proxy-fbvb9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m25s
kube-system kube-scheduler-addons-659513 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m31s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m19s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m22s kube-proxy
Normal Starting 4m37s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m37s (x8 over 4m37s) kubelet Node addons-659513 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m37s (x8 over 4m37s) kubelet Node addons-659513 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m37s (x7 over 4m37s) kubelet Node addons-659513 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m37s kubelet Updated Node Allocatable limit across pods
Normal Starting 4m31s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 4m31s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 4m31s kubelet Node addons-659513 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m31s kubelet Node addons-659513 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m31s kubelet Node addons-659513 status is now: NodeHasSufficientPID
Normal NodeReady 4m30s kubelet Node addons-659513 status is now: NodeReady
Normal RegisteredNode 4m26s node-controller Node addons-659513 event: Registered Node addons-659513 in Controller
==> dmesg <==
[ +0.235360] kauditd_printk_skb: 18 callbacks suppressed
[ +0.000190] kauditd_printk_skb: 318 callbacks suppressed
[ +0.752884] kauditd_printk_skb: 302 callbacks suppressed
[ +2.679842] kauditd_printk_skb: 395 callbacks suppressed
[ +5.111331] kauditd_printk_skb: 20 callbacks suppressed
[ +8.571017] kauditd_printk_skb: 17 callbacks suppressed
[ +6.031183] kauditd_printk_skb: 26 callbacks suppressed
[ +8.026802] kauditd_printk_skb: 113 callbacks suppressed
[ +1.032100] kauditd_printk_skb: 109 callbacks suppressed
[Dec21 19:48] kauditd_printk_skb: 82 callbacks suppressed
[ +4.328318] kauditd_printk_skb: 112 callbacks suppressed
[ +0.000026] kauditd_printk_skb: 5 callbacks suppressed
[ +0.989593] kauditd_printk_skb: 50 callbacks suppressed
[ +5.052907] kauditd_printk_skb: 47 callbacks suppressed
[ +2.427301] kauditd_printk_skb: 32 callbacks suppressed
[ +9.606133] kauditd_printk_skb: 17 callbacks suppressed
[ +6.036443] kauditd_printk_skb: 22 callbacks suppressed
[ +4.769446] kauditd_printk_skb: 59 callbacks suppressed
[Dec21 19:49] kauditd_printk_skb: 42 callbacks suppressed
[ +0.824977] kauditd_printk_skb: 184 callbacks suppressed
[ +4.079920] kauditd_printk_skb: 153 callbacks suppressed
[ +8.332087] kauditd_printk_skb: 157 callbacks suppressed
[ +0.000028] kauditd_printk_skb: 42 callbacks suppressed
[ +5.164027] kauditd_printk_skb: 61 callbacks suppressed
[Dec21 19:51] kauditd_printk_skb: 127 callbacks suppressed
==> etcd [835c8c15bbf37d26aca711aed08532cb1b32be70b119565fe2f14cdba5136552] <==
{"level":"info","ts":"2025-12-21T19:47:50.950187Z","caller":"traceutil/trace.go:172","msg":"trace[1909800173] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1009; }","duration":"186.796996ms","start":"2025-12-21T19:47:50.763381Z","end":"2025-12-21T19:47:50.950178Z","steps":["trace[1909800173] 'agreement among raft nodes before linearized reading' (duration: 185.309745ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-21T19:47:50.948275Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.624662ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-21T19:47:50.950302Z","caller":"traceutil/trace.go:172","msg":"trace[81494307] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1009; }","duration":"123.652116ms","start":"2025-12-21T19:47:50.826641Z","end":"2025-12-21T19:47:50.950293Z","steps":["trace[81494307] 'agreement among raft nodes before linearized reading' (duration: 121.60336ms)"],"step_count":1}
{"level":"info","ts":"2025-12-21T19:47:52.995299Z","caller":"traceutil/trace.go:172","msg":"trace[454392188] linearizableReadLoop","detail":"{readStateIndex:1035; appliedIndex:1035; }","duration":"229.3465ms","start":"2025-12-21T19:47:52.765935Z","end":"2025-12-21T19:47:52.995282Z","steps":["trace[454392188] 'read index received' (duration: 229.341257ms)","trace[454392188] 'applied index is now lower than readState.Index' (duration: 3.998µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-21T19:47:52.995429Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"229.463876ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-21T19:47:52.995449Z","caller":"traceutil/trace.go:172","msg":"trace[901148205] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1012; }","duration":"229.513172ms","start":"2025-12-21T19:47:52.765931Z","end":"2025-12-21T19:47:52.995444Z","steps":["trace[901148205] 'agreement among raft nodes before linearized reading' (duration: 229.439705ms)"],"step_count":1}
{"level":"info","ts":"2025-12-21T19:47:52.995460Z","caller":"traceutil/trace.go:172","msg":"trace[694933170] transaction","detail":"{read_only:false; response_revision:1013; number_of_response:1; }","duration":"244.701316ms","start":"2025-12-21T19:47:52.750746Z","end":"2025-12-21T19:47:52.995447Z","steps":["trace[694933170] 'process raft request' (duration: 244.616547ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-21T19:47:52.995636Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"168.212298ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-21T19:47:52.995653Z","caller":"traceutil/trace.go:172","msg":"trace[648566919] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1013; }","duration":"168.232054ms","start":"2025-12-21T19:47:52.827416Z","end":"2025-12-21T19:47:52.995649Z","steps":["trace[648566919] 'agreement among raft nodes before linearized reading' (duration: 168.193401ms)"],"step_count":1}
{"level":"info","ts":"2025-12-21T19:48:25.660716Z","caller":"traceutil/trace.go:172","msg":"trace[190185239] transaction","detail":"{read_only:false; response_revision:1171; number_of_response:1; }","duration":"252.757518ms","start":"2025-12-21T19:48:25.407933Z","end":"2025-12-21T19:48:25.660691Z","steps":["trace[190185239] 'process raft request' (duration: 252.656644ms)"],"step_count":1}
{"level":"info","ts":"2025-12-21T19:48:27.327463Z","caller":"traceutil/trace.go:172","msg":"trace[1509538383] transaction","detail":"{read_only:false; response_revision:1173; number_of_response:1; }","duration":"198.407293ms","start":"2025-12-21T19:48:27.129044Z","end":"2025-12-21T19:48:27.327451Z","steps":["trace[1509538383] 'process raft request' (duration: 198.312852ms)"],"step_count":1}
{"level":"info","ts":"2025-12-21T19:48:27.328232Z","caller":"traceutil/trace.go:172","msg":"trace[1565997657] linearizableReadLoop","detail":"{readStateIndex:1203; appliedIndex:1204; }","duration":"186.262719ms","start":"2025-12-21T19:48:27.141783Z","end":"2025-12-21T19:48:27.328046Z","steps":["trace[1565997657] 'read index received' (duration: 186.255953ms)","trace[1565997657] 'applied index is now lower than readState.Index' (duration: 5.648µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-21T19:48:27.328546Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"186.689782ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-21T19:48:27.328647Z","caller":"traceutil/trace.go:172","msg":"trace[215539254] range","detail":"{range_begin:/registry/replicasets; range_end:; response_count:0; response_revision:1173; }","duration":"186.801442ms","start":"2025-12-21T19:48:27.141779Z","end":"2025-12-21T19:48:27.328581Z","steps":["trace[215539254] 'agreement among raft nodes before linearized reading' (duration: 186.670911ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-21T19:48:27.330027Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"149.528724ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-21T19:48:27.330082Z","caller":"traceutil/trace.go:172","msg":"trace[847561110] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1173; }","duration":"149.628219ms","start":"2025-12-21T19:48:27.180439Z","end":"2025-12-21T19:48:27.330067Z","steps":["trace[847561110] 'agreement among raft nodes before linearized reading' (duration: 148.461363ms)"],"step_count":1}
{"level":"info","ts":"2025-12-21T19:48:56.468264Z","caller":"traceutil/trace.go:172","msg":"trace[203431017] linearizableReadLoop","detail":"{readStateIndex:1391; appliedIndex:1391; }","duration":"166.219821ms","start":"2025-12-21T19:48:56.301985Z","end":"2025-12-21T19:48:56.468205Z","steps":["trace[203431017] 'read index received' (duration: 166.207539ms)","trace[203431017] 'applied index is now lower than readState.Index' (duration: 8.328µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-21T19:48:56.470032Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"168.026643ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-21T19:48:56.470323Z","caller":"traceutil/trace.go:172","msg":"trace[468893441] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1354; }","duration":"168.330966ms","start":"2025-12-21T19:48:56.301980Z","end":"2025-12-21T19:48:56.470311Z","steps":["trace[468893441] 'agreement among raft nodes before linearized reading' (duration: 166.400235ms)"],"step_count":1}
{"level":"info","ts":"2025-12-21T19:49:02.208944Z","caller":"traceutil/trace.go:172","msg":"trace[1291651771] linearizableReadLoop","detail":"{readStateIndex:1438; appliedIndex:1438; }","duration":"305.318422ms","start":"2025-12-21T19:49:01.903609Z","end":"2025-12-21T19:49:02.208927Z","steps":["trace[1291651771] 'read index received' (duration: 305.313273ms)","trace[1291651771] 'applied index is now lower than readState.Index' (duration: 4.382µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-21T19:49:02.209060Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"305.436304ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-21T19:49:02.209079Z","caller":"traceutil/trace.go:172","msg":"trace[789901411] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1396; }","duration":"305.466459ms","start":"2025-12-21T19:49:01.903606Z","end":"2025-12-21T19:49:02.209072Z","steps":["trace[789901411] 'agreement among raft nodes before linearized reading' (duration: 305.407736ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-21T19:49:02.209103Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-21T19:49:01.903590Z","time spent":"305.50263ms","remote":"127.0.0.1:35478","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
{"level":"info","ts":"2025-12-21T19:49:02.209931Z","caller":"traceutil/trace.go:172","msg":"trace[1606590807] transaction","detail":"{read_only:false; response_revision:1397; number_of_response:1; }","duration":"344.388785ms","start":"2025-12-21T19:49:01.865533Z","end":"2025-12-21T19:49:02.209922Z","steps":["trace[1606590807] 'process raft request' (duration: 343.887894ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-21T19:49:02.210454Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-21T19:49:01.865518Z","time spent":"344.862199ms","remote":"127.0.0.1:35830","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1395 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
==> kernel <==
19:51:32 up 5 min, 0 users, load average: 0.62, 1.14, 0.59
Linux addons-659513 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Dec 20 21:36:01 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [51e3f1b192dcb7acee686c577bf7a411a3d775b35627c76e70a7d5588ed42e81] <==
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
E1221 19:47:48.673666 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.125.156:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.125.156:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.125.156:443: connect: connection refused" logger="UnhandledError"
I1221 19:47:48.710859 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E1221 19:47:48.743198 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
E1221 19:48:42.806457 1 conn.go:339] Error on socket receive: read tcp 192.168.39.164:8443->192.168.39.1:41020: use of closed network connection
E1221 19:48:43.002569 1 conn.go:339] Error on socket receive: read tcp 192.168.39.164:8443->192.168.39.1:41058: use of closed network connection
I1221 19:48:52.220073 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.196.75"}
I1221 19:49:08.680928 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1221 19:49:08.936535 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.21.239"}
I1221 19:49:24.077689 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
E1221 19:49:30.939397 1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
I1221 19:49:49.688187 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1221 19:49:52.134678 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1221 19:49:52.134793 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1221 19:49:52.166833 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1221 19:49:52.166882 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1221 19:49:52.196769 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1221 19:49:52.197108 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1221 19:49:52.227673 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1221 19:49:52.227917 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1221 19:49:53.169835 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1221 19:49:53.227991 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W1221 19:49:53.244287 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I1221 19:51:30.787256 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.25.237"}
==> kube-controller-manager [5546673aec525016ac3db18f88a4fc01cedc9678c9eb422c032127aa209ca951] <==
E1221 19:50:02.010924 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1221 19:50:02.716710 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1221 19:50:02.717786 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
I1221 19:50:07.379357 1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
I1221 19:50:07.379403 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1221 19:50:07.464387 1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
I1221 19:50:07.464440 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E1221 19:50:07.498260 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1221 19:50:07.500165 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1221 19:50:13.370720 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1221 19:50:13.371779 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1221 19:50:14.254564 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1221 19:50:14.255621 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1221 19:50:30.507175 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1221 19:50:30.508338 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1221 19:50:32.906192 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1221 19:50:32.907233 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1221 19:50:37.036337 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1221 19:50:37.037334 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1221 19:51:05.868816 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1221 19:51:05.869967 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1221 19:51:11.777474 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1221 19:51:11.778508 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1221 19:51:13.406661 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1221 19:51:13.407739 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
==> kube-proxy [944524acd2e98b5a8fbda9f53aa5af06093335f472b9c4739bf44311faf57c5f] <==
I1221 19:47:08.800911 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1221 19:47:08.902022 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1221 19:47:08.903239 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.164"]
E1221 19:47:08.907177 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1221 19:47:09.200019 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1221 19:47:09.200170 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1221 19:47:09.200205 1 server_linux.go:132] "Using iptables Proxier"
I1221 19:47:09.313047 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1221 19:47:09.313375 1 server.go:527] "Version info" version="v1.34.3"
I1221 19:47:09.313407 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1221 19:47:09.333398 1 config.go:200] "Starting service config controller"
I1221 19:47:09.333430 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1221 19:47:09.333455 1 config.go:106] "Starting endpoint slice config controller"
I1221 19:47:09.333459 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1221 19:47:09.333467 1 config.go:403] "Starting serviceCIDR config controller"
I1221 19:47:09.333471 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1221 19:47:09.335614 1 config.go:309] "Starting node config controller"
I1221 19:47:09.336110 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1221 19:47:09.434004 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1221 19:47:09.434091 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1221 19:47:09.434166 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1221 19:47:09.437060 1 shared_informer.go:356] "Caches are synced" controller="node config"
==> kube-scheduler [70cbc562e70d050f91338c415852cd26b7e7f1fdea65d9883e7b97d79508e7a6] <==
E1221 19:46:59.321104 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1221 19:46:59.322575 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1221 19:46:59.321387 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1221 19:46:59.322885 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1221 19:46:59.322949 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1221 19:46:59.323260 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1221 19:46:59.323270 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1221 19:46:59.323372 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1221 19:46:59.323414 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1221 19:46:59.323448 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1221 19:46:59.323489 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1221 19:47:00.149079 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1221 19:47:00.158573 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1221 19:47:00.187880 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1221 19:47:00.259793 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1221 19:47:00.269103 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1221 19:47:00.300173 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1221 19:47:00.346695 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1221 19:47:00.378969 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1221 19:47:00.478854 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1221 19:47:00.479688 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1221 19:47:00.500232 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1221 19:47:00.544850 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1221 19:47:00.580528 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
I1221 19:47:02.404197 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Dec 21 19:50:01 addons-659513 kubelet[1504]: E1221 19:50:01.993953 1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766346601993582438 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551108} inodes_used:{value:196}}"
Dec 21 19:50:02 addons-659513 kubelet[1504]: I1221 19:50:02.936214 1504 scope.go:117] "RemoveContainer" containerID="e2ad0089f2b30d3fc3c0b40b208508e9d62daa0110ac9b3c4d232f45be2a0c23"
Dec 21 19:50:03 addons-659513 kubelet[1504]: I1221 19:50:03.059744 1504 scope.go:117] "RemoveContainer" containerID="8d77481da0af0050129321a6ed21d1c2cb789c13cd476c83208983d9086e5c0f"
Dec 21 19:50:03 addons-659513 kubelet[1504]: I1221 19:50:03.180268 1504 scope.go:117] "RemoveContainer" containerID="1f6aaf2c36d5ff744f0e3820d2eedd7f1f39eb88e5e0935dff55980a0b590697"
Dec 21 19:50:11 addons-659513 kubelet[1504]: E1221 19:50:11.997519 1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766346611997066221 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551108} inodes_used:{value:196}}"
Dec 21 19:50:11 addons-659513 kubelet[1504]: E1221 19:50:11.997540 1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766346611997066221 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551108} inodes_used:{value:196}}"
Dec 21 19:50:22 addons-659513 kubelet[1504]: E1221 19:50:22.000481 1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766346622000107996 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551108} inodes_used:{value:196}}"
Dec 21 19:50:22 addons-659513 kubelet[1504]: E1221 19:50:22.000505 1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766346622000107996 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551108} inodes_used:{value:196}}"
Dec 21 19:50:32 addons-659513 kubelet[1504]: E1221 19:50:32.004057 1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766346632003675120 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551108} inodes_used:{value:196}}"
Dec 21 19:50:32 addons-659513 kubelet[1504]: E1221 19:50:32.004101 1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766346632003675120 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551108} inodes_used:{value:196}}"
Dec 21 19:50:42 addons-659513 kubelet[1504]: E1221 19:50:42.007092 1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766346642006597719 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551108} inodes_used:{value:196}}"
Dec 21 19:50:42 addons-659513 kubelet[1504]: E1221 19:50:42.007183 1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766346642006597719 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551108} inodes_used:{value:196}}"
Dec 21 19:50:52 addons-659513 kubelet[1504]: E1221 19:50:52.010524 1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766346652010049684 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551108} inodes_used:{value:196}}"
Dec 21 19:50:52 addons-659513 kubelet[1504]: E1221 19:50:52.010566 1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766346652010049684 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551108} inodes_used:{value:196}}"
Dec 21 19:50:59 addons-659513 kubelet[1504]: I1221 19:50:59.822897 1504 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-96g9f" secret="" err="secret \"gcp-auth\" not found"
Dec 21 19:51:02 addons-659513 kubelet[1504]: E1221 19:51:02.014247 1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766346662013673269 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551108} inodes_used:{value:196}}"
Dec 21 19:51:02 addons-659513 kubelet[1504]: E1221 19:51:02.014274 1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766346662013673269 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551108} inodes_used:{value:196}}"
Dec 21 19:51:12 addons-659513 kubelet[1504]: E1221 19:51:12.017839 1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766346672017348490 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551108} inodes_used:{value:196}}"
Dec 21 19:51:12 addons-659513 kubelet[1504]: E1221 19:51:12.017889 1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766346672017348490 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551108} inodes_used:{value:196}}"
Dec 21 19:51:22 addons-659513 kubelet[1504]: E1221 19:51:22.020899 1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766346682020545685 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551108} inodes_used:{value:196}}"
Dec 21 19:51:22 addons-659513 kubelet[1504]: E1221 19:51:22.020940 1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766346682020545685 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551108} inodes_used:{value:196}}"
Dec 21 19:51:24 addons-659513 kubelet[1504]: I1221 19:51:24.823054 1504 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Dec 21 19:51:30 addons-659513 kubelet[1504]: I1221 19:51:30.799906 1504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvf6w\" (UniqueName: \"kubernetes.io/projected/1432962d-567f-41c9-8e1a-86dc0ebcb6c5-kube-api-access-zvf6w\") pod \"hello-world-app-5d498dc89-qfn7w\" (UID: \"1432962d-567f-41c9-8e1a-86dc0ebcb6c5\") " pod="default/hello-world-app-5d498dc89-qfn7w"
Dec 21 19:51:32 addons-659513 kubelet[1504]: E1221 19:51:32.030733 1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766346692029005131 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:559714} inodes_used:{value:201}}"
Dec 21 19:51:32 addons-659513 kubelet[1504]: E1221 19:51:32.030755 1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766346692029005131 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:559714} inodes_used:{value:201}}"
==> storage-provisioner [821adec83773446bd435ef05ab329e5d395b6617013fdb8fb83cfe0e620f4c54] <==
W1221 19:51:06.914413 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1221 19:51:08.917744 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:263: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-659513 -n addons-659513
helpers_test.go:270: (dbg) Run: kubectl --context addons-659513 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-5skzk ingress-nginx-admission-patch-xlmpc
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run: kubectl --context addons-659513 describe pod ingress-nginx-admission-create-5skzk ingress-nginx-admission-patch-xlmpc
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-659513 describe pod ingress-nginx-admission-create-5skzk ingress-nginx-admission-patch-xlmpc: exit status 1 (59.133073ms)
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-5skzk" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-xlmpc" not found
** /stderr **
helpers_test.go:288: kubectl --context addons-659513 describe pod ingress-nginx-admission-create-5skzk ingress-nginx-admission-patch-xlmpc: exit status 1
addons_test.go:1055: (dbg) Run: out/minikube-linux-amd64 -p addons-659513 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run: out/minikube-linux-amd64 -p addons-659513 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-659513 addons disable ingress --alsologtostderr -v=1: (7.715410145s)
--- FAIL: TestAddons/parallel/Ingress (153.33s)