=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run: kubectl --context addons-153066 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run: kubectl --context addons-153066 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run: kubectl --context addons-153066 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [b42be6a9-0973-4607-a39f-f43345bc18fe] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [b42be6a9-0973-4607-a39f-f43345bc18fe] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.004075282s
I1216 04:29:15.134377 8987 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run: out/minikube-linux-amd64 -p addons-153066 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-153066 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.632813885s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run: kubectl --context addons-153066 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run: out/minikube-linux-amd64 -p addons-153066 ip
addons_test.go:301: (dbg) Run: nslookup hello-john.test 192.168.39.189
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-153066 -n addons-153066
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-amd64 -p addons-153066 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-153066 logs -n 25: (1.164689177s)
helpers_test.go:261: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-292678 │ download-only-292678 │ jenkins │ v1.37.0 │ 16 Dec 25 04:26 UTC │ 16 Dec 25 04:26 UTC │
│ start │ --download-only -p binary-mirror-194309 --alsologtostderr --binary-mirror http://127.0.0.1:44661 --driver=kvm2 --container-runtime=crio │ binary-mirror-194309 │ jenkins │ v1.37.0 │ 16 Dec 25 04:26 UTC │ │
│ delete │ -p binary-mirror-194309 │ binary-mirror-194309 │ jenkins │ v1.37.0 │ 16 Dec 25 04:26 UTC │ 16 Dec 25 04:26 UTC │
│ addons │ enable dashboard -p addons-153066 │ addons-153066 │ jenkins │ v1.37.0 │ 16 Dec 25 04:26 UTC │ │
│ addons │ disable dashboard -p addons-153066 │ addons-153066 │ jenkins │ v1.37.0 │ 16 Dec 25 04:26 UTC │ │
│ start │ -p addons-153066 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-153066 │ jenkins │ v1.37.0 │ 16 Dec 25 04:26 UTC │ 16 Dec 25 04:28 UTC │
│ addons │ addons-153066 addons disable volcano --alsologtostderr -v=1 │ addons-153066 │ jenkins │ v1.37.0 │ 16 Dec 25 04:28 UTC │ 16 Dec 25 04:28 UTC │
│ addons │ addons-153066 addons disable gcp-auth --alsologtostderr -v=1 │ addons-153066 │ jenkins │ v1.37.0 │ 16 Dec 25 04:28 UTC │ 16 Dec 25 04:28 UTC │
│ addons │ enable headlamp -p addons-153066 --alsologtostderr -v=1 │ addons-153066 │ jenkins │ v1.37.0 │ 16 Dec 25 04:28 UTC │ 16 Dec 25 04:28 UTC │
│ addons │ addons-153066 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-153066 │ jenkins │ v1.37.0 │ 16 Dec 25 04:28 UTC │ 16 Dec 25 04:28 UTC │
│ addons │ addons-153066 addons disable yakd --alsologtostderr -v=1 │ addons-153066 │ jenkins │ v1.37.0 │ 16 Dec 25 04:28 UTC │ 16 Dec 25 04:28 UTC │
│ addons │ addons-153066 addons disable metrics-server --alsologtostderr -v=1 │ addons-153066 │ jenkins │ v1.37.0 │ 16 Dec 25 04:28 UTC │ 16 Dec 25 04:29 UTC │
│ addons │ addons-153066 addons disable headlamp --alsologtostderr -v=1 │ addons-153066 │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │ 16 Dec 25 04:29 UTC │
│ ip │ addons-153066 ip │ addons-153066 │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │ 16 Dec 25 04:29 UTC │
│ addons │ addons-153066 addons disable registry --alsologtostderr -v=1 │ addons-153066 │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │ 16 Dec 25 04:29 UTC │
│ addons │ addons-153066 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-153066 │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │ 16 Dec 25 04:29 UTC │
│ ssh │ addons-153066 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-153066 │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │ │
│ ssh │ addons-153066 ssh cat /opt/local-path-provisioner/pvc-f15dac49-fd5a-496e-bac7-888f900e7fe3_default_test-pvc/file1 │ addons-153066 │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │ 16 Dec 25 04:29 UTC │
│ addons │ addons-153066 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-153066 │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │ 16 Dec 25 04:30 UTC │
│ addons │ addons-153066 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-153066 │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │ 16 Dec 25 04:29 UTC │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-153066 │ addons-153066 │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │ 16 Dec 25 04:29 UTC │
│ addons │ addons-153066 addons disable registry-creds --alsologtostderr -v=1 │ addons-153066 │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │ 16 Dec 25 04:29 UTC │
│ addons │ addons-153066 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-153066 │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │ 16 Dec 25 04:29 UTC │
│ addons │ addons-153066 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-153066 │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │ 16 Dec 25 04:29 UTC │
│ ip │ addons-153066 ip │ addons-153066 │ jenkins │ v1.37.0 │ 16 Dec 25 04:31 UTC │ 16 Dec 25 04:31 UTC │
└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/16 04:26:12
Running on machine: ubuntu-20-agent-11
Binary: Built with gc go1.25.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1216 04:26:12.434032 9940 out.go:360] Setting OutFile to fd 1 ...
I1216 04:26:12.434245 9940 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:26:12.434253 9940 out.go:374] Setting ErrFile to fd 2...
I1216 04:26:12.434257 9940 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:26:12.434445 9940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
I1216 04:26:12.434978 9940 out.go:368] Setting JSON to false
I1216 04:26:12.435725 9940 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":514,"bootTime":1765858658,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1216 04:26:12.435797 9940 start.go:143] virtualization: kvm guest
I1216 04:26:12.437635 9940 out.go:179] * [addons-153066] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1216 04:26:12.438703 9940 notify.go:221] Checking for updates...
I1216 04:26:12.438813 9940 out.go:179] - MINIKUBE_LOCATION=22141
I1216 04:26:12.440332 9940 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1216 04:26:12.441519 9940 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22141-5059/kubeconfig
I1216 04:26:12.442608 9940 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5059/.minikube
I1216 04:26:12.443640 9940 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1216 04:26:12.444763 9940 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1216 04:26:12.446095 9940 driver.go:422] Setting default libvirt URI to qemu:///system
I1216 04:26:12.475371 9940 out.go:179] * Using the kvm2 driver based on user configuration
I1216 04:26:12.476581 9940 start.go:309] selected driver: kvm2
I1216 04:26:12.476592 9940 start.go:927] validating driver "kvm2" against <nil>
I1216 04:26:12.476602 9940 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1216 04:26:12.477269 9940 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1216 04:26:12.477491 9940 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1216 04:26:12.477513 9940 cni.go:84] Creating CNI manager for ""
I1216 04:26:12.477586 9940 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1216 04:26:12.477596 9940 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1216 04:26:12.477632 9940 start.go:353] cluster config:
{Name:addons-153066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-153066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1216 04:26:12.477712 9940 iso.go:125] acquiring lock: {Name:mk32a15185e6e6998579c2a7c92376b162445713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1216 04:26:12.479042 9940 out.go:179] * Starting "addons-153066" primary control-plane node in "addons-153066" cluster
I1216 04:26:12.480141 9940 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1216 04:26:12.480164 9940 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
I1216 04:26:12.480170 9940 cache.go:65] Caching tarball of preloaded images
I1216 04:26:12.480241 9940 preload.go:238] Found /home/jenkins/minikube-integration/22141-5059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1216 04:26:12.480251 9940 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
I1216 04:26:12.480552 9940 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/config.json ...
I1216 04:26:12.480573 9940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/config.json: {Name:mk46adce3dd880825a7aefcae063e7ae67cca56f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 04:26:12.480697 9940 start.go:360] acquireMachinesLock for addons-153066: {Name:mk62c9c2852efe4dee40756b90f6ebee1eabe893 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1216 04:26:12.480738 9940 start.go:364] duration metric: took 29.539µs to acquireMachinesLock for "addons-153066"
I1216 04:26:12.480754 9940 start.go:93] Provisioning new machine with config: &{Name:addons-153066 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-153066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1216 04:26:12.480821 9940 start.go:125] createHost starting for "" (driver="kvm2")
I1216 04:26:12.482422 9940 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1216 04:26:12.482565 9940 start.go:159] libmachine.API.Create for "addons-153066" (driver="kvm2")
I1216 04:26:12.482592 9940 client.go:173] LocalClient.Create starting
I1216 04:26:12.482665 9940 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem
I1216 04:26:12.641223 9940 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/cert.pem
I1216 04:26:12.724611 9940 main.go:143] libmachine: creating domain...
I1216 04:26:12.724632 9940 main.go:143] libmachine: creating network...
I1216 04:26:12.725967 9940 main.go:143] libmachine: found existing default network
I1216 04:26:12.726128 9940 main.go:143] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1216 04:26:12.726636 9940 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e54870}
I1216 04:26:12.726720 9940 main.go:143] libmachine: defining private network:
<network>
<name>mk-addons-153066</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1216 04:26:12.732885 9940 main.go:143] libmachine: creating private network mk-addons-153066 192.168.39.0/24...
I1216 04:26:12.795891 9940 main.go:143] libmachine: private network mk-addons-153066 192.168.39.0/24 created
I1216 04:26:12.796151 9940 main.go:143] libmachine: <network>
<name>mk-addons-153066</name>
<uuid>f6816a7a-c807-42a7-8e60-9e09a60af5c0</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:25:8a:31'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1216 04:26:12.796180 9940 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066 ...
I1216 04:26:12.796200 9940 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22141-5059/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso
I1216 04:26:12.796211 9940 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22141-5059/.minikube
I1216 04:26:12.796273 9940 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22141-5059/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22141-5059/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso...
I1216 04:26:13.064358 9940 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa...
I1216 04:26:13.107282 9940 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/addons-153066.rawdisk...
I1216 04:26:13.107323 9940 main.go:143] libmachine: Writing magic tar header
I1216 04:26:13.107344 9940 main.go:143] libmachine: Writing SSH key tar header
I1216 04:26:13.107419 9940 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066 ...
I1216 04:26:13.107479 9940 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066
I1216 04:26:13.107501 9940 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066 (perms=drwx------)
I1216 04:26:13.107510 9940 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22141-5059/.minikube/machines
I1216 04:26:13.107522 9940 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22141-5059/.minikube/machines (perms=drwxr-xr-x)
I1216 04:26:13.107534 9940 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22141-5059/.minikube
I1216 04:26:13.107542 9940 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22141-5059/.minikube (perms=drwxr-xr-x)
I1216 04:26:13.107552 9940 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22141-5059
I1216 04:26:13.107559 9940 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22141-5059 (perms=drwxrwxr-x)
I1216 04:26:13.107569 9940 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1216 04:26:13.107577 9940 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1216 04:26:13.107587 9940 main.go:143] libmachine: checking permissions on dir: /home/jenkins
I1216 04:26:13.107594 9940 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1216 04:26:13.107603 9940 main.go:143] libmachine: checking permissions on dir: /home
I1216 04:26:13.107615 9940 main.go:143] libmachine: skipping /home - not owner
I1216 04:26:13.107621 9940 main.go:143] libmachine: defining domain...
I1216 04:26:13.108754 9940 main.go:143] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-153066</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/addons-153066.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-153066'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
I1216 04:26:13.115987 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:89:32:e6 in network default
I1216 04:26:13.116527 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:13.116544 9940 main.go:143] libmachine: starting domain...
I1216 04:26:13.116548 9940 main.go:143] libmachine: ensuring networks are active...
I1216 04:26:13.117137 9940 main.go:143] libmachine: Ensuring network default is active
I1216 04:26:13.117465 9940 main.go:143] libmachine: Ensuring network mk-addons-153066 is active
I1216 04:26:13.117967 9940 main.go:143] libmachine: getting domain XML...
I1216 04:26:13.118785 9940 main.go:143] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-153066</name>
<uuid>b9b65814-80b7-4e0a-92c8-4d21ede24ac3</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/addons-153066.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:c6:57:6e'/>
<source network='mk-addons-153066'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:89:32:e6'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1216 04:26:14.371510 9940 main.go:143] libmachine: waiting for domain to start...
I1216 04:26:14.372591 9940 main.go:143] libmachine: domain is now running
I1216 04:26:14.372606 9940 main.go:143] libmachine: waiting for IP...
I1216 04:26:14.373230 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:14.373797 9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
I1216 04:26:14.373812 9940 main.go:143] libmachine: trying to list again with source=arp
I1216 04:26:14.374048 9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
I1216 04:26:14.374084 9940 retry.go:31] will retry after 206.182677ms: waiting for domain to come up
I1216 04:26:14.581277 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:14.581761 9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
I1216 04:26:14.581788 9940 main.go:143] libmachine: trying to list again with source=arp
I1216 04:26:14.582071 9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
I1216 04:26:14.582100 9940 retry.go:31] will retry after 293.803735ms: waiting for domain to come up
I1216 04:26:14.877483 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:14.877990 9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
I1216 04:26:14.878003 9940 main.go:143] libmachine: trying to list again with source=arp
I1216 04:26:14.878242 9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
I1216 04:26:14.878273 9940 retry.go:31] will retry after 366.70569ms: waiting for domain to come up
I1216 04:26:15.246797 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:15.247378 9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
I1216 04:26:15.247393 9940 main.go:143] libmachine: trying to list again with source=arp
I1216 04:26:15.247824 9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
I1216 04:26:15.247864 9940 retry.go:31] will retry after 388.153383ms: waiting for domain to come up
I1216 04:26:15.637394 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:15.637888 9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
I1216 04:26:15.637906 9940 main.go:143] libmachine: trying to list again with source=arp
I1216 04:26:15.638219 9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
I1216 04:26:15.638257 9940 retry.go:31] will retry after 698.046366ms: waiting for domain to come up
I1216 04:26:16.338095 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:16.338614 9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
I1216 04:26:16.338633 9940 main.go:143] libmachine: trying to list again with source=arp
I1216 04:26:16.338897 9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
I1216 04:26:16.338929 9940 retry.go:31] will retry after 725.381934ms: waiting for domain to come up
I1216 04:26:17.065883 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:17.066447 9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
I1216 04:26:17.066465 9940 main.go:143] libmachine: trying to list again with source=arp
I1216 04:26:17.066802 9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
I1216 04:26:17.066837 9940 retry.go:31] will retry after 1.128973689s: waiting for domain to come up
I1216 04:26:18.197211 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:18.197736 9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
I1216 04:26:18.197751 9940 main.go:143] libmachine: trying to list again with source=arp
I1216 04:26:18.198068 9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
I1216 04:26:18.198106 9940 retry.go:31] will retry after 1.258194359s: waiting for domain to come up
I1216 04:26:19.458700 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:19.459255 9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
I1216 04:26:19.459282 9940 main.go:143] libmachine: trying to list again with source=arp
I1216 04:26:19.459610 9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
I1216 04:26:19.459650 9940 retry.go:31] will retry after 1.218744169s: waiting for domain to come up
I1216 04:26:20.679886 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:20.680439 9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
I1216 04:26:20.680451 9940 main.go:143] libmachine: trying to list again with source=arp
I1216 04:26:20.680764 9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
I1216 04:26:20.680810 9940 retry.go:31] will retry after 1.442537405s: waiting for domain to come up
I1216 04:26:22.125650 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:22.126346 9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
I1216 04:26:22.126370 9940 main.go:143] libmachine: trying to list again with source=arp
I1216 04:26:22.126726 9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
I1216 04:26:22.126765 9940 retry.go:31] will retry after 2.564829172s: waiting for domain to come up
I1216 04:26:24.694377 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:24.694948 9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
I1216 04:26:24.694963 9940 main.go:143] libmachine: trying to list again with source=arp
I1216 04:26:24.695211 9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
I1216 04:26:24.695253 9940 retry.go:31] will retry after 2.37531298s: waiting for domain to come up
I1216 04:26:27.072479 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:27.072976 9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
I1216 04:26:27.072989 9940 main.go:143] libmachine: trying to list again with source=arp
I1216 04:26:27.073211 9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
I1216 04:26:27.073242 9940 retry.go:31] will retry after 3.46923009s: waiting for domain to come up
I1216 04:26:30.546096 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:30.546585 9940 main.go:143] libmachine: domain addons-153066 has current primary IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:30.546598 9940 main.go:143] libmachine: found domain IP: 192.168.39.189
I1216 04:26:30.546605 9940 main.go:143] libmachine: reserving static IP address...
I1216 04:26:30.546945 9940 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-153066", mac: "52:54:00:c6:57:6e", ip: "192.168.39.189"} in network mk-addons-153066
I1216 04:26:30.728384 9940 main.go:143] libmachine: reserved static IP address 192.168.39.189 for domain addons-153066
I1216 04:26:30.728410 9940 main.go:143] libmachine: waiting for SSH...
I1216 04:26:30.728418 9940 main.go:143] libmachine: Getting to WaitForSSH function...
I1216 04:26:30.730920 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:30.731291 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:30.731310 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:30.731554 9940 main.go:143] libmachine: Using SSH client type: native
I1216 04:26:30.731792 9940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.189 22 <nil> <nil>}
I1216 04:26:30.731803 9940 main.go:143] libmachine: About to run SSH command:
exit 0
I1216 04:26:30.838568 9940 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1216 04:26:30.838924 9940 main.go:143] libmachine: domain creation complete
I1216 04:26:30.840487 9940 machine.go:94] provisionDockerMachine start ...
I1216 04:26:30.842970 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:30.843298 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:30.843316 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:30.843453 9940 main.go:143] libmachine: Using SSH client type: native
I1216 04:26:30.843716 9940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.189 22 <nil> <nil>}
I1216 04:26:30.843732 9940 main.go:143] libmachine: About to run SSH command:
hostname
I1216 04:26:30.947446 9940 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
I1216 04:26:30.947479 9940 buildroot.go:166] provisioning hostname "addons-153066"
I1216 04:26:30.950700 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:30.951140 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:30.951164 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:30.951359 9940 main.go:143] libmachine: Using SSH client type: native
I1216 04:26:30.951608 9940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.189 22 <nil> <nil>}
I1216 04:26:30.951622 9940 main.go:143] libmachine: About to run SSH command:
sudo hostname addons-153066 && echo "addons-153066" | sudo tee /etc/hostname
I1216 04:26:31.074594 9940 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-153066
I1216 04:26:31.077336 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:31.077784 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:31.077811 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:31.078013 9940 main.go:143] libmachine: Using SSH client type: native
I1216 04:26:31.078246 9940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.189 22 <nil> <nil>}
I1216 04:26:31.078263 9940 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-153066' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-153066/g' /etc/hosts;
else
echo '127.0.1.1 addons-153066' | sudo tee -a /etc/hosts;
fi
fi
I1216 04:26:31.203843 9940 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1216 04:26:31.203873 9940 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5059/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5059/.minikube}
I1216 04:26:31.203908 9940 buildroot.go:174] setting up certificates
I1216 04:26:31.203919 9940 provision.go:84] configureAuth start
I1216 04:26:31.206493 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:31.206859 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:31.206890 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:31.209067 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:31.209362 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:31.209384 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:31.209499 9940 provision.go:143] copyHostCerts
I1216 04:26:31.209558 9940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5059/.minikube/key.pem (1675 bytes)
I1216 04:26:31.209666 9940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5059/.minikube/ca.pem (1082 bytes)
I1216 04:26:31.209751 9940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5059/.minikube/cert.pem (1123 bytes)
I1216 04:26:31.209825 9940 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca-key.pem org=jenkins.addons-153066 san=[127.0.0.1 192.168.39.189 addons-153066 localhost minikube]
I1216 04:26:31.303447 9940 provision.go:177] copyRemoteCerts
I1216 04:26:31.303513 9940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1216 04:26:31.305998 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:31.307261 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:31.307288 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:31.307477 9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
I1216 04:26:31.389937 9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1216 04:26:31.418451 9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1216 04:26:31.446584 9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1216 04:26:31.475317 9940 provision.go:87] duration metric: took 271.351496ms to configureAuth
I1216 04:26:31.475350 9940 buildroot.go:189] setting minikube options for container-runtime
I1216 04:26:31.475522 9940 config.go:182] Loaded profile config "addons-153066": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:26:31.478363 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:31.478758 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:31.478807 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:31.478985 9940 main.go:143] libmachine: Using SSH client type: native
I1216 04:26:31.479213 9940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.189 22 <nil> <nil>}
I1216 04:26:31.479236 9940 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1216 04:26:31.717516 9940 main.go:143] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1216 04:26:31.717545 9940 machine.go:97] duration metric: took 877.038533ms to provisionDockerMachine
I1216 04:26:31.717559 9940 client.go:176] duration metric: took 19.234961055s to LocalClient.Create
I1216 04:26:31.717578 9940 start.go:167] duration metric: took 19.23501183s to libmachine.API.Create "addons-153066"
I1216 04:26:31.717588 9940 start.go:293] postStartSetup for "addons-153066" (driver="kvm2")
I1216 04:26:31.717600 9940 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1216 04:26:31.717656 9940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1216 04:26:31.720287 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:31.720673 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:31.720696 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:31.720857 9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
I1216 04:26:31.803365 9940 ssh_runner.go:195] Run: cat /etc/os-release
I1216 04:26:31.808013 9940 info.go:137] Remote host: Buildroot 2025.02
I1216 04:26:31.808039 9940 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5059/.minikube/addons for local assets ...
I1216 04:26:31.808116 9940 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5059/.minikube/files for local assets ...
I1216 04:26:31.808139 9940 start.go:296] duration metric: took 90.54538ms for postStartSetup
I1216 04:26:31.821759 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:31.822167 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:31.822190 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:31.822446 9940 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/config.json ...
I1216 04:26:31.828927 9940 start.go:128] duration metric: took 19.348094211s to createHost
I1216 04:26:31.831328 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:31.831725 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:31.831753 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:31.831965 9940 main.go:143] libmachine: Using SSH client type: native
I1216 04:26:31.832227 9940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.189 22 <nil> <nil>}
I1216 04:26:31.832244 9940 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1216 04:26:31.937324 9940 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765859191.900097585
I1216 04:26:31.937350 9940 fix.go:216] guest clock: 1765859191.900097585
I1216 04:26:31.937360 9940 fix.go:229] Guest: 2025-12-16 04:26:31.900097585 +0000 UTC Remote: 2025-12-16 04:26:31.82894645 +0000 UTC m=+19.439554359 (delta=71.151135ms)
I1216 04:26:31.937391 9940 fix.go:200] guest clock delta is within tolerance: 71.151135ms
I1216 04:26:31.937396 9940 start.go:83] releasing machines lock for "addons-153066", held for 19.456649812s
I1216 04:26:31.939797 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:31.940168 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:31.940188 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:31.940665 9940 ssh_runner.go:195] Run: cat /version.json
I1216 04:26:31.940740 9940 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1216 04:26:31.943751 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:31.944032 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:31.944187 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:31.944215 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:31.944349 9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
I1216 04:26:31.944526 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:31.944561 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:31.944724 9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
I1216 04:26:32.029876 9940 ssh_runner.go:195] Run: systemctl --version
I1216 04:26:32.059197 9940 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1216 04:26:32.712109 9940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1216 04:26:32.718819 9940 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1216 04:26:32.718872 9940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1216 04:26:32.742804 9940 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1216 04:26:32.742827 9940 start.go:496] detecting cgroup driver to use...
I1216 04:26:32.742896 9940 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1216 04:26:32.764024 9940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1216 04:26:32.780817 9940 docker.go:218] disabling cri-docker service (if available) ...
I1216 04:26:32.780871 9940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1216 04:26:32.797826 9940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1216 04:26:32.813247 9940 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1216 04:26:32.957205 9940 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1216 04:26:33.165397 9940 docker.go:234] disabling docker service ...
I1216 04:26:33.165471 9940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1216 04:26:33.183025 9940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1216 04:26:33.198643 9940 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1216 04:26:33.354740 9940 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1216 04:26:33.498644 9940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1216 04:26:33.514383 9940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1216 04:26:33.541164 9940 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1216 04:26:33.541220 9940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1216 04:26:33.554012 9940 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1216 04:26:33.554070 9940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1216 04:26:33.566596 9940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1216 04:26:33.578561 9940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1216 04:26:33.590294 9940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1216 04:26:33.602835 9940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1216 04:26:33.615271 9940 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1216 04:26:33.634945 9940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I1216 04:26:33.646443 9940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1216 04:26:33.656320 9940 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1216 04:26:33.656362 9940 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1216 04:26:33.676450 9940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1216 04:26:33.688552 9940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1216 04:26:33.832431 9940 ssh_runner.go:195] Run: sudo systemctl restart crio
I1216 04:26:33.939250 9940 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1216 04:26:33.939345 9940 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1216 04:26:33.944839 9940 start.go:564] Will wait 60s for crictl version
I1216 04:26:33.944920 9940 ssh_runner.go:195] Run: which crictl
I1216 04:26:33.948980 9940 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1216 04:26:33.985521 9940 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1216 04:26:33.985672 9940 ssh_runner.go:195] Run: crio --version
I1216 04:26:34.014844 9940 ssh_runner.go:195] Run: crio --version
I1216 04:26:34.044955 9940 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
I1216 04:26:34.048412 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:34.048743 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:34.048786 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:34.048969 9940 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1216 04:26:34.053408 9940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1216 04:26:34.069056 9940 kubeadm.go:884] updating cluster {Name:addons-153066 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-153066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1216 04:26:34.069152 9940 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1216 04:26:34.069206 9940 ssh_runner.go:195] Run: sudo crictl images --output json
I1216 04:26:34.104023 9940 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
I1216 04:26:34.104084 9940 ssh_runner.go:195] Run: which lz4
I1216 04:26:34.108372 9940 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1216 04:26:34.112962 9940 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1216 04:26:34.112989 9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
I1216 04:26:35.326872 9940 crio.go:462] duration metric: took 1.218523885s to copy over tarball
I1216 04:26:35.326941 9940 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1216 04:26:36.775076 9940 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.44810628s)
I1216 04:26:36.775102 9940 crio.go:469] duration metric: took 1.44820094s to extract the tarball
I1216 04:26:36.775112 9940 ssh_runner.go:146] rm: /preloaded.tar.lz4
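The preload sequence above (an existence check via `stat`, an scp of the cached tarball, an `lz4` extraction into `/var`, then removal of the tarball) can be sketched as a standalone helper. This is an illustration only, not minikube's actual `ssh_runner`: it runs locally, uses a plain tar archive so no `lz4` binary is required, and `preload_extract` and its paths are hypothetical names.

```shell
# preload_extract: sketch of the log's preload flow (stat check, copy,
# extract, cleanup). SRC is a local tar archive, DEST the extraction root.
# Illustrative only; minikube does this over SSH with an lz4 tarball.
preload_extract() {
    src=$1 dest=$2
    tarball="$dest/preloaded.tar"

    # existence check, mirroring: stat -c "%s %y" /preloaded.tar.lz4
    if ! stat -c "%s %y" "$tarball" >/dev/null 2>&1; then
        cp "$src" "$tarball"       # the log copies with scp instead of cp
    fi

    # extraction, mirroring: tar --xattrs -I lz4 -C /var -xf /preloaded.tar.lz4
    tar -C "$dest" -xf "$tarball"

    rm -f "$tarball"               # mirrors the final rm of /preloaded.tar.lz4
}
```

The same check-copy-extract-remove shape explains the log timings: the copy (~1.2s) and extraction (~1.4s) are the two dominated steps.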
I1216 04:26:36.810756 9940 ssh_runner.go:195] Run: sudo crictl images --output json
I1216 04:26:36.851866 9940 crio.go:514] all images are preloaded for cri-o runtime.
I1216 04:26:36.851886 9940 cache_images.go:86] Images are preloaded, skipping loading
I1216 04:26:36.851894 9940 kubeadm.go:935] updating node { 192.168.39.189 8443 v1.34.2 crio true true} ...
I1216 04:26:36.851987 9940 kubeadm.go:947] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-153066 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.189
[Install]
config:
{KubernetesVersion:v1.34.2 ClusterName:addons-153066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
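The kubelet unit text above is installed as a systemd drop-in (the later scp to `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` at 04:26:36.921). A minimal sketch of rendering that drop-in follows; the flag values are taken from the log, but `write_kubelet_dropin` is a hypothetical helper and it writes to a caller-supplied directory rather than `/etc/systemd`. Note the empty `ExecStart=` line, which clears the base unit's command before redefining it.

```shell
# Sketch: render the kubelet override from the log as a systemd drop-in file.
# OUT is a stand-in for /etc/systemd/system/kubelet.service.d (no sudo here).
write_kubelet_dropin() {
    out=$1
    mkdir -p "$out"
    cat > "$out/10-kubeadm.conf" <<'EOF'
[Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-153066 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.189
[Install]
EOF
}
```

After installing a drop-in for real, `systemctl daemon-reload` is required for systemd to pick it up, which is exactly the step the log runs at 04:26:37.002.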
I1216 04:26:36.852063 9940 ssh_runner.go:195] Run: crio config
I1216 04:26:36.897204 9940 cni.go:84] Creating CNI manager for ""
I1216 04:26:36.897229 9940 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1216 04:26:36.897246 9940 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1216 04:26:36.897272 9940 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.189 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-153066 NodeName:addons-153066 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.189"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.189 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1216 04:26:36.897418 9940 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.189
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/crio/crio.sock
name: "addons-153066"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.39.189"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.189"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
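The generated kubeadm.yaml above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by `---`. A quick way to sanity-check such a file before handing it to `kubeadm init --config` is to list the `kind:` of each document; the `list_kinds` helper below is a hypothetical awk one-liner, not part of minikube.

```shell
# list_kinds: print the top-level kind of each document in a multi-document
# kubeadm config file, one per line. Assumes kind: appears at column zero,
# as in the config the log generates.
list_kinds() {
    awk '/^kind:/ { print $2 }' "$1"
}
```

On the file above this would print the four kinds in order, matching the `kubeadm.go:196` dump.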
I1216 04:26:36.897490 9940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
I1216 04:26:36.909742 9940 binaries.go:51] Found k8s binaries, skipping transfer
I1216 04:26:36.909822 9940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1216 04:26:36.921184 9940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I1216 04:26:36.941873 9940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1216 04:26:36.962574 9940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
I1216 04:26:36.983619 9940 ssh_runner.go:195] Run: grep 192.168.39.189 control-plane.minikube.internal$ /etc/hosts
I1216 04:26:36.988196 9940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.189 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1216 04:26:37.002842 9940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1216 04:26:37.143893 9940 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1216 04:26:37.163929 9940 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066 for IP: 192.168.39.189
I1216 04:26:37.163958 9940 certs.go:195] generating shared ca certs ...
I1216 04:26:37.163980 9940 certs.go:227] acquiring lock for ca certs: {Name:mkeb038c86653b42975db55bc13142d606c3d109 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 04:26:37.164172 9940 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5059/.minikube/ca.key
I1216 04:26:37.325901 9940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt ...
I1216 04:26:37.325930 9940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt: {Name:mkb298cbd6f2a662a2ef54c0f206ce67489c4c74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 04:26:37.326098 9940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5059/.minikube/ca.key ...
I1216 04:26:37.326109 9940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/ca.key: {Name:mk2ea7454f689a63b0191fe48cc639ae4d6c694d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 04:26:37.326184 9940 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.key
I1216 04:26:37.356677 9940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.crt ...
I1216 04:26:37.356702 9940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.crt: {Name:mk9837582ee8f37268e0fda446ec14b506c621b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 04:26:37.356838 9940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.key ...
I1216 04:26:37.356850 9940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.key: {Name:mk5a6dbe24498aa7e3157b178a702ef9442795b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 04:26:37.356914 9940 certs.go:257] generating profile certs ...
I1216 04:26:37.356981 9940 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.key
I1216 04:26:37.356997 9940 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt with IP's: []
I1216 04:26:37.589063 9940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt ...
I1216 04:26:37.589091 9940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: {Name:mkd597a69d61b484fd3d6ce7897d18f14f48dc61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 04:26:37.589285 9940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.key ...
I1216 04:26:37.589299 9940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.key: {Name:mke91f5e16b918a39d7606ef726c59a7541b4091 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 04:26:37.589889 9940 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/apiserver.key.e0ee3c2c
I1216 04:26:37.589913 9940 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/apiserver.crt.e0ee3c2c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.189]
I1216 04:26:37.636033 9940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/apiserver.crt.e0ee3c2c ...
I1216 04:26:37.636061 9940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/apiserver.crt.e0ee3c2c: {Name:mk10338343b5e41315c0439a0a3bc6d65d053dbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 04:26:37.636242 9940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/apiserver.key.e0ee3c2c ...
I1216 04:26:37.636257 9940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/apiserver.key.e0ee3c2c: {Name:mk0aff05bc76f89ebebf42b652565025805a9bf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 04:26:37.636372 9940 certs.go:382] copying /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/apiserver.crt.e0ee3c2c -> /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/apiserver.crt
I1216 04:26:37.636449 9940 certs.go:386] copying /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/apiserver.key.e0ee3c2c -> /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/apiserver.key
I1216 04:26:37.636497 9940 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/proxy-client.key
I1216 04:26:37.636514 9940 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/proxy-client.crt with IP's: []
I1216 04:26:37.654727 9940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/proxy-client.crt ...
I1216 04:26:37.654747 9940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/proxy-client.crt: {Name:mka0ad637d45ce84d377e79efcf58bb26360f7f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 04:26:37.654918 9940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/proxy-client.key ...
I1216 04:26:37.654934 9940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/proxy-client.key: {Name:mkcd7cec00b5145ff289ee427ce1adf9fa8341c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
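The certs.go steps above follow a CA-then-leaf pattern: generate a self-signed CA (`minikubeCA`), then sign per-profile certificates against it, with the apiserver cert carrying the IP SANs listed at 04:26:37.589. A hedged sketch of the same two-step flow using the openssl CLI follows; `make_certs` is a hypothetical helper, the key size and lifetimes are this sketch's own choices, and minikube actually does this in Go, not by shelling out.

```shell
# make_certs: sketch of the CA-then-leaf flow from certs.go.
# 1) self-signed CA key+cert, 2) leaf key+CSR, 3) CA-signed leaf cert
# with the IP SANs seen in the log. DIR is a scratch directory.
make_certs() {
    dir=$1
    # CA, mirroring the log's minikubeCA generation
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout "$dir/ca.key" -out "$dir/ca.crt" -subj "/CN=minikubeCA" 2>/dev/null
    # leaf key + CSR, mirroring the apiserver profile cert
    openssl req -newkey rsa:2048 -nodes \
        -keyout "$dir/apiserver.key" -out "$dir/apiserver.csr" \
        -subj "/CN=minikube" 2>/dev/null
    # sign with the CA, adding the IP SANs from the log
    printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.189\n' > "$dir/san.ext"
    openssl x509 -req -in "$dir/apiserver.csr" -CA "$dir/ca.crt" -CAkey "$dir/ca.key" \
        -CAcreateserial -days 365 -extfile "$dir/san.ext" -out "$dir/apiserver.crt" 2>/dev/null
}
```

The SAN list is why the cert at 04:26:37.636 is written under a hashed name (`apiserver.crt.e0ee3c2c`) and then copied to `apiserver.crt`: the suffix encodes the SAN set, so a changed node IP forces regeneration.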
I1216 04:26:37.655138 9940 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca-key.pem (1675 bytes)
I1216 04:26:37.655173 9940 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem (1082 bytes)
I1216 04:26:37.655200 9940 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/cert.pem (1123 bytes)
I1216 04:26:37.655224 9940 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/key.pem (1675 bytes)
I1216 04:26:37.655756 9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1216 04:26:37.686969 9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1216 04:26:37.716845 9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1216 04:26:37.746408 9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1216 04:26:37.775533 9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1216 04:26:37.804706 9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1216 04:26:37.834021 9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1216 04:26:37.871887 9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1216 04:26:37.905376 9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1216 04:26:37.936298 9940 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1216 04:26:37.956342 9940 ssh_runner.go:195] Run: openssl version
I1216 04:26:37.962738 9940 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1216 04:26:37.973506 9940 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1216 04:26:37.984247 9940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1216 04:26:37.989438 9940 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:26 /usr/share/ca-certificates/minikubeCA.pem
I1216 04:26:37.989480 9940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1216 04:26:37.996227 9940 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1216 04:26:38.006746 9940 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1216 04:26:38.017730 9940 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1216 04:26:38.022446 9940 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1216 04:26:38.022491 9940 kubeadm.go:401] StartCluster: {Name:addons-153066 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-153066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1216 04:26:38.022571 9940 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1216 04:26:38.022633 9940 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1216 04:26:38.055305 9940 cri.go:89] found id: ""
I1216 04:26:38.055391 9940 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1216 04:26:38.067112 9940 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1216 04:26:38.078619 9940 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1216 04:26:38.089225 9940 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1216 04:26:38.089241 9940 kubeadm.go:158] found existing configuration files:
I1216 04:26:38.089283 9940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1216 04:26:38.099396 9940 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1216 04:26:38.099464 9940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1216 04:26:38.110843 9940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1216 04:26:38.121147 9940 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1216 04:26:38.121182 9940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1216 04:26:38.131451 9940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1216 04:26:38.141465 9940 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1216 04:26:38.141531 9940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1216 04:26:38.152415 9940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1216 04:26:38.162496 9940 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1216 04:26:38.162548 9940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1216 04:26:38.172971 9940 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1216 04:26:38.318478 9940 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1216 04:26:51.244458 9940 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
I1216 04:26:51.244534 9940 kubeadm.go:319] [preflight] Running pre-flight checks
I1216 04:26:51.244637 9940 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1216 04:26:51.244788 9940 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1216 04:26:51.244903 9940 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1216 04:26:51.244955 9940 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1216 04:26:51.247059 9940 out.go:252] - Generating certificates and keys ...
I1216 04:26:51.247138 9940 kubeadm.go:319] [certs] Using existing ca certificate authority
I1216 04:26:51.247226 9940 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1216 04:26:51.247309 9940 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1216 04:26:51.247358 9940 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1216 04:26:51.247433 9940 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1216 04:26:51.247497 9940 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1216 04:26:51.247548 9940 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1216 04:26:51.247640 9940 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-153066 localhost] and IPs [192.168.39.189 127.0.0.1 ::1]
I1216 04:26:51.247716 9940 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1216 04:26:51.247886 9940 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-153066 localhost] and IPs [192.168.39.189 127.0.0.1 ::1]
I1216 04:26:51.247980 9940 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1216 04:26:51.248069 9940 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1216 04:26:51.248141 9940 kubeadm.go:319] [certs] Generating "sa" key and public key
I1216 04:26:51.248193 9940 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1216 04:26:51.248235 9940 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1216 04:26:51.248295 9940 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1216 04:26:51.248337 9940 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1216 04:26:51.248410 9940 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1216 04:26:51.248478 9940 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1216 04:26:51.248552 9940 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1216 04:26:51.248665 9940 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1216 04:26:51.250907 9940 out.go:252] - Booting up control plane ...
I1216 04:26:51.251011 9940 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1216 04:26:51.251119 9940 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1216 04:26:51.251217 9940 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1216 04:26:51.251381 9940 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1216 04:26:51.251504 9940 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1216 04:26:51.251666 9940 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1216 04:26:51.251744 9940 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1216 04:26:51.251791 9940 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1216 04:26:51.251910 9940 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1216 04:26:51.252023 9940 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1216 04:26:51.252116 9940 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001332118s
I1216 04:26:51.252253 9940 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1216 04:26:51.252356 9940 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.189:8443/livez
I1216 04:26:51.252467 9940 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1216 04:26:51.252537 9940 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1216 04:26:51.252596 9940 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.035648545s
I1216 04:26:51.252678 9940 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.732859848s
I1216 04:26:51.252766 9940 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502235974s
I1216 04:26:51.252902 9940 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1216 04:26:51.253048 9940 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1216 04:26:51.253137 9940 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1216 04:26:51.253347 9940 kubeadm.go:319] [mark-control-plane] Marking the node addons-153066 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1216 04:26:51.253417 9940 kubeadm.go:319] [bootstrap-token] Using token: s9emtg.znl4zc5yufahvhxg
I1216 04:26:51.255003 9940 out.go:252] - Configuring RBAC rules ...
I1216 04:26:51.255100 9940 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1216 04:26:51.255186 9940 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1216 04:26:51.255345 9940 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1216 04:26:51.255505 9940 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1216 04:26:51.255628 9940 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1216 04:26:51.255725 9940 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1216 04:26:51.255862 9940 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1216 04:26:51.255928 9940 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1216 04:26:51.256003 9940 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1216 04:26:51.256011 9940 kubeadm.go:319]
I1216 04:26:51.256103 9940 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1216 04:26:51.256117 9940 kubeadm.go:319]
I1216 04:26:51.256212 9940 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1216 04:26:51.256223 9940 kubeadm.go:319]
I1216 04:26:51.256262 9940 kubeadm.go:319] mkdir -p $HOME/.kube
I1216 04:26:51.256419 9940 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1216 04:26:51.256491 9940 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1216 04:26:51.256501 9940 kubeadm.go:319]
I1216 04:26:51.256572 9940 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1216 04:26:51.256581 9940 kubeadm.go:319]
I1216 04:26:51.256648 9940 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1216 04:26:51.256656 9940 kubeadm.go:319]
I1216 04:26:51.256729 9940 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1216 04:26:51.256853 9940 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1216 04:26:51.256945 9940 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1216 04:26:51.256956 9940 kubeadm.go:319]
I1216 04:26:51.257059 9940 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1216 04:26:51.257163 9940 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1216 04:26:51.257171 9940 kubeadm.go:319]
I1216 04:26:51.257294 9940 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token s9emtg.znl4zc5yufahvhxg \
I1216 04:26:51.257387 9940 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:6b10d8aa5d34951ef0c68d93c25038a5fa50fdf938787206894299e135264d81 \
I1216 04:26:51.257409 9940 kubeadm.go:319] --control-plane
I1216 04:26:51.257415 9940 kubeadm.go:319]
I1216 04:26:51.257508 9940 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1216 04:26:51.257524 9940 kubeadm.go:319]
I1216 04:26:51.257630 9940 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token s9emtg.znl4zc5yufahvhxg \
I1216 04:26:51.257781 9940 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:6b10d8aa5d34951ef0c68d93c25038a5fa50fdf938787206894299e135264d81
I1216 04:26:51.257797 9940 cni.go:84] Creating CNI manager for ""
I1216 04:26:51.257804 9940 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1216 04:26:51.259690 9940 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1216 04:26:51.260727 9940 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1216 04:26:51.274427 9940 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
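[editor's note] The 496-byte `/etc/cni/net.d/1-k8s.conflist` written here is not reproduced in the log. As a sketch only, a bridge-plus-portmap conflist of the kind minikube's bridge CNI option generates typically looks like the following (field values are illustrative, not the actual file contents):

```
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```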
I1216 04:26:51.296193 9940 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1216 04:26:51.296294 9940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 04:26:51.296325 9940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-153066 minikube.k8s.io/updated_at=2025_12_16T04_26_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=addons-153066 minikube.k8s.io/primary=true
I1216 04:26:51.452890 9940 ops.go:34] apiserver oom_adj: -16
I1216 04:26:51.452901 9940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 04:26:51.953823 9940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 04:26:52.453292 9940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 04:26:52.952971 9940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 04:26:53.453237 9940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 04:26:53.953981 9940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 04:26:54.453616 9940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 04:26:54.954040 9940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 04:26:55.453480 9940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 04:26:55.953936 9940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 04:26:56.070327 9940 kubeadm.go:1114] duration metric: took 4.774098956s to wait for elevateKubeSystemPrivileges
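[editor's note] The burst of identical `kubectl get sa default` runs above (roughly every 500ms until `elevateKubeSystemPrivileges` reports success after ~4.8s) is a poll-until-ready pattern. A minimal sketch of that pattern, assuming a hypothetical `wait_for` helper that is not part of minikube:

```shell
# wait_for TRIES CMD...: run CMD until it succeeds, sleeping 0.5s
# between attempts (mirrors the ~500ms cadence in the log above).
wait_for() {
  tries=$1; shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    if "$@" >/dev/null 2>&1; then
      return 0        # command succeeded; stop polling
    fi
    sleep 0.5
    i=$((i + 1))
  done
  return 1            # gave up after TRIES attempts
}

# In the log, the probed command is (paths per this run):
#   sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
#     --kubeconfig=/var/lib/minikube/kubeconfig
# so usage would be roughly: wait_for 12 sudo ... kubectl get sa default
```

The probe returns non-zero until the `default` ServiceAccount exists, which is why the same command repeats in the log until the apiserver has created it.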
I1216 04:26:56.070375 9940 kubeadm.go:403] duration metric: took 18.047885467s to StartCluster
I1216 04:26:56.070397 9940 settings.go:142] acquiring lock: {Name:mk934ce4e0f52c59044080dacae6bea8d1937fab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 04:26:56.070571 9940 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22141-5059/kubeconfig
I1216 04:26:56.071174 9940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/kubeconfig: {Name:mk2e0aa2a9ecd47e0407b52e183f6fd294eb595a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 04:26:56.071409 9940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1216 04:26:56.071436 9940 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1216 04:26:56.071639 9940 config.go:182] Loaded profile config "addons-153066": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:26:56.071558 9940 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1216 04:26:56.071725 9940 addons.go:70] Setting default-storageclass=true in profile "addons-153066"
I1216 04:26:56.071742 9940 addons.go:70] Setting gcp-auth=true in profile "addons-153066"
I1216 04:26:56.071750 9940 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-153066"
I1216 04:26:56.071747 9940 addons.go:70] Setting cloud-spanner=true in profile "addons-153066"
I1216 04:26:56.071768 9940 addons.go:70] Setting ingress=true in profile "addons-153066"
I1216 04:26:56.071798 9940 addons.go:239] Setting addon ingress=true in "addons-153066"
I1216 04:26:56.071803 9940 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-153066"
I1216 04:26:56.071761 9940 mustload.go:66] Loading cluster: addons-153066
I1216 04:26:56.071824 9940 addons.go:70] Setting ingress-dns=true in profile "addons-153066"
I1216 04:26:56.071829 9940 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-153066"
I1216 04:26:56.071837 9940 addons.go:239] Setting addon ingress-dns=true in "addons-153066"
I1216 04:26:56.071841 9940 host.go:66] Checking if "addons-153066" exists ...
I1216 04:26:56.071871 9940 host.go:66] Checking if "addons-153066" exists ...
I1216 04:26:56.071890 9940 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-153066"
I1216 04:26:56.071905 9940 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-153066"
I1216 04:26:56.071997 9940 config.go:182] Loaded profile config "addons-153066": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:26:56.072020 9940 addons.go:70] Setting metrics-server=true in profile "addons-153066"
I1216 04:26:56.072039 9940 addons.go:239] Setting addon metrics-server=true in "addons-153066"
I1216 04:26:56.072067 9940 host.go:66] Checking if "addons-153066" exists ...
I1216 04:26:56.072344 9940 addons.go:70] Setting inspektor-gadget=true in profile "addons-153066"
I1216 04:26:56.072364 9940 addons.go:239] Setting addon inspektor-gadget=true in "addons-153066"
I1216 04:26:56.072405 9940 host.go:66] Checking if "addons-153066" exists ...
I1216 04:26:56.072805 9940 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-153066"
I1216 04:26:56.072828 9940 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-153066"
I1216 04:26:56.072851 9940 host.go:66] Checking if "addons-153066" exists ...
I1216 04:26:56.072874 9940 addons.go:70] Setting registry-creds=true in profile "addons-153066"
I1216 04:26:56.072893 9940 addons.go:239] Setting addon registry-creds=true in "addons-153066"
I1216 04:26:56.072921 9940 host.go:66] Checking if "addons-153066" exists ...
I1216 04:26:56.071730 9940 addons.go:70] Setting yakd=true in profile "addons-153066"
I1216 04:26:56.073103 9940 addons.go:239] Setting addon yakd=true in "addons-153066"
I1216 04:26:56.073129 9940 host.go:66] Checking if "addons-153066" exists ...
I1216 04:26:56.073156 9940 addons.go:70] Setting volcano=true in profile "addons-153066"
I1216 04:26:56.073172 9940 addons.go:239] Setting addon volcano=true in "addons-153066"
I1216 04:26:56.073194 9940 host.go:66] Checking if "addons-153066" exists ...
I1216 04:26:56.071811 9940 addons.go:239] Setting addon cloud-spanner=true in "addons-153066"
I1216 04:26:56.073639 9940 host.go:66] Checking if "addons-153066" exists ...
I1216 04:26:56.071805 9940 addons.go:70] Setting registry=true in profile "addons-153066"
I1216 04:26:56.073708 9940 addons.go:239] Setting addon registry=true in "addons-153066"
I1216 04:26:56.073732 9940 host.go:66] Checking if "addons-153066" exists ...
I1216 04:26:56.073871 9940 addons.go:70] Setting storage-provisioner=true in profile "addons-153066"
I1216 04:26:56.073896 9940 addons.go:239] Setting addon storage-provisioner=true in "addons-153066"
I1216 04:26:56.073922 9940 host.go:66] Checking if "addons-153066" exists ...
I1216 04:26:56.071874 9940 host.go:66] Checking if "addons-153066" exists ...
I1216 04:26:56.074125 9940 addons.go:70] Setting volumesnapshots=true in profile "addons-153066"
I1216 04:26:56.074129 9940 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-153066"
I1216 04:26:56.074142 9940 addons.go:239] Setting addon volumesnapshots=true in "addons-153066"
I1216 04:26:56.074171 9940 host.go:66] Checking if "addons-153066" exists ...
I1216 04:26:56.074206 9940 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-153066"
I1216 04:26:56.074232 9940 host.go:66] Checking if "addons-153066" exists ...
I1216 04:26:56.074550 9940 out.go:179] * Verifying Kubernetes components...
I1216 04:26:56.076179 9940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1216 04:26:56.079131 9940 host.go:66] Checking if "addons-153066" exists ...
I1216 04:26:56.080404 9940 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-153066"
I1216 04:26:56.080436 9940 host.go:66] Checking if "addons-153066" exists ...
I1216 04:26:56.080408 9940 addons.go:239] Setting addon default-storageclass=true in "addons-153066"
I1216 04:26:56.080521 9940 host.go:66] Checking if "addons-153066" exists ...
I1216 04:26:56.081518 9940 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1216 04:26:56.081518 9940 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1216 04:26:56.081523 9940 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
I1216 04:26:56.082397 9940 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
W1216 04:26:56.082694 9940 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1216 04:26:56.083238 9940 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1216 04:26:56.083253 9940 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1216 04:26:56.083265 9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1216 04:26:56.083255 9940 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1216 04:26:56.083949 9940 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1216 04:26:56.084014 9940 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1216 04:26:56.084389 9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1216 04:26:56.084789 9940 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
I1216 04:26:56.084829 9940 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1216 04:26:56.084803 9940 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
I1216 04:26:56.084806 9940 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.5
I1216 04:26:56.084792 9940 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1216 04:26:56.084873 9940 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1216 04:26:56.085532 9940 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1216 04:26:56.086073 9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1216 04:26:56.085516 9940 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1216 04:26:56.085558 9940 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1216 04:26:56.085530 9940 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1216 04:26:56.086013 9940 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1216 04:26:56.086921 9940 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1216 04:26:56.086242 9940 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1216 04:26:56.087375 9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1216 04:26:56.087037 9940 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1216 04:26:56.087648 9940 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1216 04:26:56.087044 9940 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1216 04:26:56.087766 9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1216 04:26:56.087907 9940 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1216 04:26:56.087919 9940 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1216 04:26:56.087924 9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1216 04:26:56.087931 9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1216 04:26:56.088668 9940 out.go:179] - Using image docker.io/registry:3.0.0
I1216 04:26:56.088675 9940 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1216 04:26:56.088675 9940 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1216 04:26:56.088751 9940 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1216 04:26:56.089646 9940 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1216 04:26:56.090632 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.090728 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.090787 9940 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1216 04:26:56.090836 9940 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1216 04:26:56.091245 9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1216 04:26:56.091568 9940 out.go:179] - Using image docker.io/busybox:stable
I1216 04:26:56.091703 9940 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1216 04:26:56.091718 9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1216 04:26:56.091973 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.092405 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:56.092438 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.092524 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:56.092583 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.093209 9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
I1216 04:26:56.093209 9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
I1216 04:26:56.093306 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:56.093365 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.094553 9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
I1216 04:26:56.094802 9940 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1216 04:26:56.094817 9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1216 04:26:56.095912 9940 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1216 04:26:56.095949 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.097226 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:56.097263 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.098132 9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
I1216 04:26:56.098337 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.098414 9940 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1216 04:26:56.098439 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.099118 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.099452 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:56.099484 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.099594 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:56.099630 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.099501 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.099751 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.100143 9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
I1216 04:26:56.100455 9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
I1216 04:26:56.100622 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.100713 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:56.100792 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.100852 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.100964 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:56.100980 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:56.100995 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.101020 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.101283 9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
I1216 04:26:56.101387 9940 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1216 04:26:56.101497 9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
I1216 04:26:56.101536 9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
I1216 04:26:56.101902 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.101975 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:56.102003 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.102015 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:56.102037 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.102372 9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
I1216 04:26:56.102408 9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
I1216 04:26:56.102646 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.102909 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:56.102943 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.103063 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:56.103090 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.103125 9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
I1216 04:26:56.103297 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.103396 9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
I1216 04:26:56.103839 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:56.103861 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.104026 9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
I1216 04:26:56.105060 9940 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1216 04:26:56.106374 9940 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1216 04:26:56.107612 9940 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1216 04:26:56.108716 9940 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1216 04:26:56.108750 9940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1216 04:26:56.111714 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.112176 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:26:56.112210 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:26:56.112376 9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
W1216 04:26:56.423168 9940 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57314->192.168.39.189:22: read: connection reset by peer
I1216 04:26:56.423209 9940 retry.go:31] will retry after 132.583334ms: ssh: handshake failed: read tcp 192.168.39.1:57314->192.168.39.189:22: read: connection reset by peer
I1216 04:26:56.744464 9940 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1216 04:26:56.744490 9940 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1216 04:26:56.904137 9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1216 04:26:56.907534 9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1216 04:26:56.935654 9940 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1216 04:26:56.936165 9940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
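[editor's note] The `sed` pipeline above edits the CoreDNS ConfigMap in place: it inserts a `hosts` block before the `forward . /etc/resolv.conf` line and a `log` directive before `errors`, then `kubectl replace`s the ConfigMap. Reconstructed from those two sed expressions, the relevant part of the resulting Corefile is (other directives elided):

```
.:53 {
    log
    errors
    ...
    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
    ...
}
```

This is how pods in the cluster resolve `host.minikube.internal` to the host-side IP of the KVM network (192.168.39.1 in this run); `fallthrough` lets all other names continue to the upstream `forward` resolver.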
I1216 04:26:56.968921 9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1216 04:26:56.991535 9940 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1216 04:26:56.991558 9940 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1216 04:26:57.004009 9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1216 04:26:57.012508 9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
I1216 04:26:57.028306 9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1216 04:26:57.034563 9940 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1216 04:26:57.034582 9940 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1216 04:26:57.060467 9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1216 04:26:57.077665 9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1216 04:26:57.099842 9940 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1216 04:26:57.099879 9940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1216 04:26:57.105430 9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1216 04:26:57.144375 9940 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1216 04:26:57.144399 9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1216 04:26:57.210328 9940 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1216 04:26:57.210362 9940 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1216 04:26:57.426856 9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1216 04:26:57.616728 9940 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1216 04:26:57.616751 9940 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1216 04:26:57.784605 9940 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1216 04:26:57.784640 9940 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1216 04:26:57.788618 9940 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1216 04:26:57.788641 9940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1216 04:26:57.806965 9940 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1216 04:26:57.806988 9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1216 04:26:57.852050 9940 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1216 04:26:57.852076 9940 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1216 04:26:58.078661 9940 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1216 04:26:58.078687 9940 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1216 04:26:58.205646 9940 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1216 04:26:58.205678 9940 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1216 04:26:58.215452 9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1216 04:26:58.238971 9940 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1216 04:26:58.239003 9940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1216 04:26:58.244015 9940 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1216 04:26:58.244040 9940 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1216 04:26:58.391330 9940 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1216 04:26:58.391349 9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1216 04:26:58.443818 9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1216 04:26:58.544295 9940 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1216 04:26:58.544319 9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1216 04:26:58.581020 9940 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1216 04:26:58.581046 9940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1216 04:26:58.765379 9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1216 04:26:59.009290 9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1216 04:26:59.069270 9940 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1216 04:26:59.069301 9940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1216 04:26:59.283310 9940 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1216 04:26:59.283333 9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1216 04:26:59.577950 9940 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1216 04:26:59.577987 9940 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1216 04:26:59.847299 9940 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1216 04:26:59.847324 9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1216 04:27:00.102895 9940 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1216 04:27:00.102917 9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1216 04:27:00.492114 9940 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1216 04:27:00.492138 9940 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1216 04:27:00.895558 9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1216 04:27:02.010407 9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.106231897s)
I1216 04:27:02.010450 9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.102885366s)
I1216 04:27:02.010503 9940 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.074805991s)
I1216 04:27:02.010554 9940 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.074364287s)
I1216 04:27:02.010582 9940 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1216 04:27:02.010623 9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.041673975s)
I1216 04:27:02.010694 9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.006655948s)
I1216 04:27:02.011405 9940 node_ready.go:35] waiting up to 6m0s for node "addons-153066" to be "Ready" ...
I1216 04:27:02.112422 9940 node_ready.go:49] node "addons-153066" is "Ready"
I1216 04:27:02.112457 9940 node_ready.go:38] duration metric: took 101.002439ms for node "addons-153066" to be "Ready" ...
I1216 04:27:02.112473 9940 api_server.go:52] waiting for apiserver process to appear ...
I1216 04:27:02.112525 9940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1216 04:27:02.517553 9940 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-153066" context rescaled to 1 replicas
I1216 04:27:02.728185 9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (5.715638367s)
I1216 04:27:02.728240 9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.699900737s)
I1216 04:27:02.873813 9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.813304765s)
I1216 04:27:02.873881 9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.796184218s)
I1216 04:27:02.873961 9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.768496288s)
I1216 04:27:03.502735 9940 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1216 04:27:03.505452 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:27:03.505875 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:27:03.505899 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:27:03.506055 9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
I1216 04:27:03.941716 9940 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1216 04:27:04.070804 9940 addons.go:239] Setting addon gcp-auth=true in "addons-153066"
I1216 04:27:04.070876 9940 host.go:66] Checking if "addons-153066" exists ...
I1216 04:27:04.072802 9940 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1216 04:27:04.075287 9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:27:04.075723 9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
I1216 04:27:04.075745 9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
I1216 04:27:04.075927 9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
I1216 04:27:04.714121 9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.287225247s)
I1216 04:27:04.714157 9940 addons.go:495] Verifying addon ingress=true in "addons-153066"
I1216 04:27:04.714181 9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.498698095s)
I1216 04:27:04.714209 9940 addons.go:495] Verifying addon registry=true in "addons-153066"
I1216 04:27:04.714284 9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.27042975s)
I1216 04:27:04.714313 9940 addons.go:495] Verifying addon metrics-server=true in "addons-153066"
I1216 04:27:04.714372 9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.948966868s)
I1216 04:27:04.715764 9940 out.go:179] * Verifying ingress addon...
I1216 04:27:04.715794 9940 out.go:179] * Verifying registry addon...
I1216 04:27:04.716431 9940 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-153066 service yakd-dashboard -n yakd-dashboard
I1216 04:27:04.717680 9940 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1216 04:27:04.717886 9940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1216 04:27:04.816536 9940 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1216 04:27:04.816563 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:04.816738 9940 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1216 04:27:04.816760 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:05.236884 9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.227548553s)
W1216 04:27:05.236937 9940 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1216 04:27:05.236964 9940 retry.go:31] will retry after 318.718007ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1216 04:27:05.246600 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:05.246744 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:05.555903 9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1216 04:27:05.733789 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:05.733975 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:06.068386 9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.172771145s)
I1216 04:27:06.068441 9940 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-153066"
I1216 04:27:06.068481 9940 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.995648502s)
I1216 04:27:06.068410 9940 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.955863538s)
I1216 04:27:06.068531 9940 api_server.go:72] duration metric: took 9.997048865s to wait for apiserver process to appear ...
I1216 04:27:06.068548 9940 api_server.go:88] waiting for apiserver healthz status ...
I1216 04:27:06.068578 9940 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8443/healthz ...
I1216 04:27:06.070047 9940 out.go:179] * Verifying csi-hostpath-driver addon...
I1216 04:27:06.070063 9940 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1216 04:27:06.071958 9940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1216 04:27:06.073151 9940 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1216 04:27:06.074111 9940 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1216 04:27:06.074127 9940 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1216 04:27:06.090876 9940 api_server.go:279] https://192.168.39.189:8443/healthz returned 200:
ok
I1216 04:27:06.092811 9940 api_server.go:141] control plane version: v1.34.2
I1216 04:27:06.092835 9940 api_server.go:131] duration metric: took 24.279496ms to wait for apiserver health ...
I1216 04:27:06.092846 9940 system_pods.go:43] waiting for kube-system pods to appear ...
I1216 04:27:06.106737 9940 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1216 04:27:06.106757 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:06.108167 9940 system_pods.go:59] 20 kube-system pods found
I1216 04:27:06.108201 9940 system_pods.go:61] "amd-gpu-device-plugin-hhs5c" [7c605597-4044-4415-a423-ac0bc2d63d1f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1216 04:27:06.108209 9940 system_pods.go:61] "coredns-66bc5c9577-jbx8s" [0709930e-115a-4d78-b4bf-514176ebc1dc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1216 04:27:06.108219 9940 system_pods.go:61] "coredns-66bc5c9577-k5hzj" [c86aac94-7319-4717-b09c-4c5ce48d083b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1216 04:27:06.108229 9940 system_pods.go:61] "csi-hostpath-attacher-0" [c470943f-0e67-4ab6-839b-9373ba7a9393] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1216 04:27:06.108235 9940 system_pods.go:61] "csi-hostpath-resizer-0" [bff855f5-caa5-4b55-a322-a8296584227b] Pending
I1216 04:27:06.108241 9940 system_pods.go:61] "csi-hostpathplugin-82zcc" [e35aa6fc-ceba-4edc-8c51-bbee1dd678e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1216 04:27:06.108247 9940 system_pods.go:61] "etcd-addons-153066" [58bb4ccb-f65f-48ee-bdd8-11f5d4ab35d6] Running
I1216 04:27:06.108251 9940 system_pods.go:61] "kube-apiserver-addons-153066" [65425427-5f1b-456d-917f-421ffab25e59] Running
I1216 04:27:06.108255 9940 system_pods.go:61] "kube-controller-manager-addons-153066" [c827dadd-9054-4d66-a51f-ca33293eeed4] Running
I1216 04:27:06.108266 9940 system_pods.go:61] "kube-ingress-dns-minikube" [becea7ef-45d0-4bec-8470-fe1f574391a6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1216 04:27:06.108271 9940 system_pods.go:61] "kube-proxy-h5nhv" [98c8054c-fb42-44bb-96c3-b9e2b534f591] Running
I1216 04:27:06.108274 9940 system_pods.go:61] "kube-scheduler-addons-153066" [553dd44f-dd6d-44f2-b24e-fd2ac993b9d6] Running
I1216 04:27:06.108278 9940 system_pods.go:61] "metrics-server-85b7d694d7-qm9rk" [0ea4c9ef-e70d-4d40-8e23-271dbeeb59b9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1216 04:27:06.108284 9940 system_pods.go:61] "nvidia-device-plugin-daemonset-z4dn4" [3c096eaf-758d-432e-81f4-c8dfdd7b23cb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1216 04:27:06.108291 9940 system_pods.go:61] "registry-6b586f9694-bxf9q" [afd4c327-e7bf-4429-ad65-493431f56200] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1216 04:27:06.108296 9940 system_pods.go:61] "registry-creds-764b6fb674-q9m7r" [5c769b6a-2a35-4bc3-8118-0ecb8c704bcb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1216 04:27:06.108303 9940 system_pods.go:61] "registry-proxy-pbkbs" [0bced886-f1b7-415e-91d0-5f533bcfe8c0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1216 04:27:06.108309 9940 system_pods.go:61] "snapshot-controller-7d9fbc56b8-d5zsw" [66d92118-0ca5-449c-8de3-9d6e936d4145] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1216 04:27:06.108316 9940 system_pods.go:61] "snapshot-controller-7d9fbc56b8-fmtr4" [b2b180dc-7184-400d-acd5-364d26ca2e15] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1216 04:27:06.108320 9940 system_pods.go:61] "storage-provisioner" [261052b5-937f-4f46-8238-ab5a0913c588] Running
I1216 04:27:06.108328 9940 system_pods.go:74] duration metric: took 15.476535ms to wait for pod list to return data ...
I1216 04:27:06.108338 9940 default_sa.go:34] waiting for default service account to be created ...
I1216 04:27:06.116992 9940 default_sa.go:45] found service account: "default"
I1216 04:27:06.117014 9940 default_sa.go:55] duration metric: took 8.667417ms for default service account to be created ...
I1216 04:27:06.117026 9940 system_pods.go:116] waiting for k8s-apps to be running ...
I1216 04:27:06.147263 9940 system_pods.go:86] 20 kube-system pods found
I1216 04:27:06.147306 9940 system_pods.go:89] "amd-gpu-device-plugin-hhs5c" [7c605597-4044-4415-a423-ac0bc2d63d1f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1216 04:27:06.147320 9940 system_pods.go:89] "coredns-66bc5c9577-jbx8s" [0709930e-115a-4d78-b4bf-514176ebc1dc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1216 04:27:06.147334 9940 system_pods.go:89] "coredns-66bc5c9577-k5hzj" [c86aac94-7319-4717-b09c-4c5ce48d083b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1216 04:27:06.147350 9940 system_pods.go:89] "csi-hostpath-attacher-0" [c470943f-0e67-4ab6-839b-9373ba7a9393] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1216 04:27:06.147358 9940 system_pods.go:89] "csi-hostpath-resizer-0" [bff855f5-caa5-4b55-a322-a8296584227b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1216 04:27:06.147938 9940 system_pods.go:89] "csi-hostpathplugin-82zcc" [e35aa6fc-ceba-4edc-8c51-bbee1dd678e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1216 04:27:06.147962 9940 system_pods.go:89] "etcd-addons-153066" [58bb4ccb-f65f-48ee-bdd8-11f5d4ab35d6] Running
I1216 04:27:06.147970 9940 system_pods.go:89] "kube-apiserver-addons-153066" [65425427-5f1b-456d-917f-421ffab25e59] Running
I1216 04:27:06.147975 9940 system_pods.go:89] "kube-controller-manager-addons-153066" [c827dadd-9054-4d66-a51f-ca33293eeed4] Running
I1216 04:27:06.147986 9940 system_pods.go:89] "kube-ingress-dns-minikube" [becea7ef-45d0-4bec-8470-fe1f574391a6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1216 04:27:06.147991 9940 system_pods.go:89] "kube-proxy-h5nhv" [98c8054c-fb42-44bb-96c3-b9e2b534f591] Running
I1216 04:27:06.148000 9940 system_pods.go:89] "kube-scheduler-addons-153066" [553dd44f-dd6d-44f2-b24e-fd2ac993b9d6] Running
I1216 04:27:06.148008 9940 system_pods.go:89] "metrics-server-85b7d694d7-qm9rk" [0ea4c9ef-e70d-4d40-8e23-271dbeeb59b9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1216 04:27:06.148019 9940 system_pods.go:89] "nvidia-device-plugin-daemonset-z4dn4" [3c096eaf-758d-432e-81f4-c8dfdd7b23cb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1216 04:27:06.148028 9940 system_pods.go:89] "registry-6b586f9694-bxf9q" [afd4c327-e7bf-4429-ad65-493431f56200] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1216 04:27:06.148037 9940 system_pods.go:89] "registry-creds-764b6fb674-q9m7r" [5c769b6a-2a35-4bc3-8118-0ecb8c704bcb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1216 04:27:06.148044 9940 system_pods.go:89] "registry-proxy-pbkbs" [0bced886-f1b7-415e-91d0-5f533bcfe8c0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1216 04:27:06.148057 9940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d5zsw" [66d92118-0ca5-449c-8de3-9d6e936d4145] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1216 04:27:06.148066 9940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fmtr4" [b2b180dc-7184-400d-acd5-364d26ca2e15] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1216 04:27:06.148073 9940 system_pods.go:89] "storage-provisioner" [261052b5-937f-4f46-8238-ab5a0913c588] Running
I1216 04:27:06.148083 9940 system_pods.go:126] duration metric: took 31.049946ms to wait for k8s-apps to be running ...
I1216 04:27:06.148097 9940 system_svc.go:44] waiting for kubelet service to be running ....
I1216 04:27:06.148157 9940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1216 04:27:06.227563 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:06.229510 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:06.233951 9940 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1216 04:27:06.233973 9940 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1216 04:27:06.372451 9940 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1216 04:27:06.372483 9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1216 04:27:06.463549 9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1216 04:27:06.579097 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:06.729528 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:06.730640 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:07.079988 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:07.224305 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:07.224619 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:07.586047 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:07.726254 9940 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.57806945s)
I1216 04:27:07.726288 9940 system_svc.go:56] duration metric: took 1.578188264s WaitForService to wait for kubelet
I1216 04:27:07.726299 9940 kubeadm.go:587] duration metric: took 11.654816881s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1216 04:27:07.726320 9940 node_conditions.go:102] verifying NodePressure condition ...
I1216 04:27:07.726257 9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.170313303s)
I1216 04:27:07.735589 9940 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1216 04:27:07.735628 9940 node_conditions.go:123] node cpu capacity is 2
I1216 04:27:07.735646 9940 node_conditions.go:105] duration metric: took 9.319252ms to run NodePressure ...
I1216 04:27:07.735660 9940 start.go:242] waiting for startup goroutines ...
I1216 04:27:07.737610 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:07.738700 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:07.842357 9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.378763722s)
I1216 04:27:07.843404 9940 addons.go:495] Verifying addon gcp-auth=true in "addons-153066"
I1216 04:27:07.844974 9940 out.go:179] * Verifying gcp-auth addon...
I1216 04:27:07.847187 9940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1216 04:27:07.860248 9940 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1216 04:27:07.860263 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:08.081522 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:08.229931 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:08.229975 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:08.354239 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:08.585379 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:08.725689 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:08.726166 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:08.851479 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:09.081006 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:09.223323 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:09.224480 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:09.351925 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:09.580385 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:09.725546 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:09.727097 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:09.854767 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:10.077526 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:10.221816 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:10.221911 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:10.352037 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:10.576414 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:10.722668 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:10.723007 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:10.877245 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:11.079244 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:11.225203 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:11.228072 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:11.350727 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:11.577161 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:11.728047 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:11.728614 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:11.851574 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:12.076097 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:12.220907 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:12.222324 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:12.350214 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:12.575932 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:12.723074 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:12.723240 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:12.850851 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:13.075863 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:13.220817 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:13.221612 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:13.350670 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:13.576941 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:13.725823 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:13.725993 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:13.853435 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:14.076182 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:14.222838 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:14.225256 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:14.352084 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:14.576599 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:14.722332 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:14.723211 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:14.850564 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:15.076790 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:15.220742 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:15.220985 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:15.351581 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:15.577401 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:15.722521 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:15.723609 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:15.853013 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:16.077400 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:16.221762 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:16.221994 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:16.352150 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:16.575739 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:16.721286 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:16.721893 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:16.852680 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:17.078497 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:17.225595 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:17.225759 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:17.351626 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:17.576240 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:17.724349 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:17.726740 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:17.852188 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:18.077969 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:18.221321 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:18.224877 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:18.351719 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:18.576673 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:18.724493 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:18.725497 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:18.850126 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:19.082174 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:19.223797 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:19.225352 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:19.353700 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:19.576626 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:19.722718 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:19.723219 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:19.886622 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:20.077427 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:20.224462 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:20.226403 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:20.353518 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:20.578299 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:20.723960 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:20.724361 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:20.855977 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:21.075324 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:21.222389 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:21.224041 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:21.706026 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:21.706056 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:21.723412 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:21.723989 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:21.853250 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:22.077245 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:22.222950 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:22.224807 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:22.351170 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:22.578517 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:22.723743 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:22.726794 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:22.855294 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:23.077111 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:23.224949 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:23.226755 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:23.354055 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:23.577894 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:23.829228 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:23.829481 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:23.850186 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:24.085789 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:24.221843 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:24.223242 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:24.350845 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:24.576144 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:24.722460 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:24.722876 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:24.855717 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:25.078789 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:25.221847 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:25.223197 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:25.350288 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:25.575869 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:25.720994 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:25.721191 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:25.849886 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:26.075686 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:26.221033 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:26.221248 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:26.350254 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:26.576553 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:26.723542 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:26.724618 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:26.854821 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:27.075265 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:27.223181 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:27.223861 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:27.357203 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:27.576066 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:27.725031 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:27.727482 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:27.855550 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:28.077528 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:28.221782 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:28.222025 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:28.352446 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:28.578028 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:28.722358 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:28.722827 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:28.851145 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:29.076321 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:29.224578 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:29.225071 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:29.350911 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:29.575438 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:29.721700 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:29.722746 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:29.850882 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:30.075214 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:30.222969 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:30.223232 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:30.350134 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:30.575638 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:30.721474 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:30.722054 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:30.851134 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:31.076490 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:31.221564 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:31.221915 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:31.350891 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:31.576446 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:31.726903 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:31.729195 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:31.853394 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:32.078515 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:32.225337 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:32.229676 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:32.352306 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:32.578592 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:32.721947 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:32.722475 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:32.850940 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:33.079376 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:33.221359 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:33.221871 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:33.352444 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:33.576369 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:33.721322 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:33.722062 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:33.850623 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:34.076802 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:34.225793 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:34.226352 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:34.351416 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:34.590463 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:34.925817 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:34.925948 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:34.926028 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:35.077072 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:35.224215 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:35.225571 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:35.351932 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:35.576011 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:35.721458 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:35.721715 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:35.850967 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:36.075912 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:36.221210 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:36.221576 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:36.351236 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:36.576512 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:36.722383 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:36.722450 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:36.850551 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:37.075848 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:37.221365 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:37.221526 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:37.350638 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:37.575806 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:37.721253 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:37.721380 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:37.851063 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:38.075583 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:38.224329 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:38.225831 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:38.354787 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:38.578394 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:38.725450 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:38.725640 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:38.852229 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:39.075758 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:39.220907 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:39.221892 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:39.355005 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:39.578472 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:39.722363 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 04:27:39.722836 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:39.851569 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:40.076509 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:40.222384 9940 kapi.go:107] duration metric: took 35.504494079s to wait for kubernetes.io/minikube-addons=registry ...
I1216 04:27:40.222624 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:40.350940 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:40.575464 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:40.721894 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:40.850684 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:41.077410 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:41.227906 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:41.351684 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:41.576530 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:41.721310 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:41.850570 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:42.079845 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:42.222444 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:42.352569 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:42.578383 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:42.722408 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:42.850414 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:43.077377 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:43.221359 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:43.351831 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:43.576276 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:43.721690 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:43.851404 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:44.082117 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:44.220732 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:44.351974 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:44.576364 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:44.874716 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:44.877662 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:45.077589 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:45.222331 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:45.350369 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:45.576421 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:45.722330 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:45.851234 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:46.079881 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:46.222360 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:46.353880 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:46.579475 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:46.723915 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:46.853013 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:47.076039 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:47.225939 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:47.354615 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:47.577391 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:47.721609 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:47.852242 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:48.076635 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:48.221572 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:48.350535 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:48.578813 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:48.721708 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:48.850097 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:49.076543 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:49.222157 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:49.351049 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:49.578746 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:49.721783 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:50.311943 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:50.312001 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:50.312227 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:50.353896 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:50.575133 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:50.723766 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:50.851339 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:51.077966 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:51.220842 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:51.350808 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:51.576748 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:51.720575 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:51.850538 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:52.077089 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:52.221660 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:52.354643 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:52.575016 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:52.723673 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:52.850367 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:53.081895 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:53.225204 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:53.353603 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:53.577462 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:53.722119 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:53.851971 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:54.078279 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:54.222084 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:54.352499 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:54.577547 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:54.722655 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:54.851060 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:55.077896 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:55.227930 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:55.464752 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:55.579788 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:55.722068 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:55.851870 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:56.076238 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:56.226962 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:56.356487 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:56.580230 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:56.723727 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:56.850521 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:57.091411 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:57.225587 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:57.351524 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:57.577676 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:57.722910 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:57.851158 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:58.078039 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:58.224177 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:58.352508 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:58.579280 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:58.722928 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:58.855117 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:59.080798 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:59.222647 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:59.350630 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:27:59.581464 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:27:59.727360 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:27:59.850274 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:00.076395 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:00.222833 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:00.352571 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:00.577415 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:00.721964 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:00.851188 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:01.075290 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:01.223344 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:01.353310 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:01.577232 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:01.722097 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:01.850964 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:02.079828 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:02.226810 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:02.350596 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:02.576410 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:02.722661 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:02.853332 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:03.077157 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:03.223329 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:03.353115 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:03.668906 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:03.723801 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:03.851609 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:04.079433 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:04.222034 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:04.350990 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:04.576845 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:04.739438 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:04.852864 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:05.074875 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:05.226667 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:05.352222 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:05.576994 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:05.723186 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:05.851113 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:06.079813 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:06.221577 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:06.354571 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:06.584103 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:06.721924 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:06.852050 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:07.082112 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:07.221812 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:07.351368 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:07.582263 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:07.723402 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:07.852185 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:08.075812 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:08.221720 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:08.350751 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:08.580432 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:08.724146 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:08.855595 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:09.078720 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:09.226580 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:09.355612 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:09.576522 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:09.723332 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:10.038449 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:10.078920 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:10.225141 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:10.352350 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:10.577100 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:10.723766 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:10.853646 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:11.077374 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:11.221529 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:11.355944 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:11.579651 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:11.731522 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:11.853323 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:12.077802 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:12.226870 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:12.355234 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:12.576652 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:12.730421 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:12.851945 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:13.081415 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:13.226658 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:13.353243 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:13.579099 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:13.723065 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:13.852418 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:14.077265 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:14.224651 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:14.353055 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:14.576549 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:14.722972 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:14.851547 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:15.077576 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:15.222673 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:15.353491 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:15.578329 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:15.722425 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:15.850218 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:16.078573 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:16.222157 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:16.354677 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:16.575665 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:16.722972 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:16.852377 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:17.078904 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:17.222215 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:17.353737 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:17.575806 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:17.723258 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:17.850679 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:18.079604 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:18.228599 9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 04:28:18.352169 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:18.580171 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:18.732208 9940 kapi.go:107] duration metric: took 1m14.014526101s to wait for app.kubernetes.io/name=ingress-nginx ...
I1216 04:28:18.851292 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:19.078359 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:19.352220 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:19.575997 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:19.886282 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:20.076413 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:20.350519 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:20.576792 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 04:28:20.851888 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:21.077715 9940 kapi.go:107] duration metric: took 1m15.005755537s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1216 04:28:21.352626 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:21.851864 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:22.350254 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:22.851924 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:23.352766 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:23.851889 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:24.350527 9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 04:28:24.852310 9940 kapi.go:107] duration metric: took 1m17.005122808s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1216 04:28:24.853865 9940 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-153066 cluster.
I1216 04:28:24.854911 9940 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1216 04:28:24.855909 9940 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I1216 04:28:24.857104 9940 out.go:179] * Enabled addons: ingress-dns, amd-gpu-device-plugin, registry-creds, default-storageclass, inspektor-gadget, nvidia-device-plugin, storage-provisioner, cloud-spanner, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
I1216 04:28:24.858189 9940 addons.go:530] duration metric: took 1m28.786633957s for enable addons: enabled=[ingress-dns amd-gpu-device-plugin registry-creds default-storageclass inspektor-gadget nvidia-device-plugin storage-provisioner cloud-spanner storage-provisioner-rancher metrics-server yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
I1216 04:28:24.858230 9940 start.go:247] waiting for cluster config update ...
I1216 04:28:24.858252 9940 start.go:256] writing updated cluster config ...
I1216 04:28:24.858511 9940 ssh_runner.go:195] Run: rm -f paused
I1216 04:28:24.865606 9940 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1216 04:28:24.868591 9940 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-k5hzj" in "kube-system" namespace to be "Ready" or be gone ...
I1216 04:28:24.873106 9940 pod_ready.go:94] pod "coredns-66bc5c9577-k5hzj" is "Ready"
I1216 04:28:24.873122 9940 pod_ready.go:86] duration metric: took 4.512807ms for pod "coredns-66bc5c9577-k5hzj" in "kube-system" namespace to be "Ready" or be gone ...
I1216 04:28:24.875497 9940 pod_ready.go:83] waiting for pod "etcd-addons-153066" in "kube-system" namespace to be "Ready" or be gone ...
I1216 04:28:24.880402 9940 pod_ready.go:94] pod "etcd-addons-153066" is "Ready"
I1216 04:28:24.880418 9940 pod_ready.go:86] duration metric: took 4.903012ms for pod "etcd-addons-153066" in "kube-system" namespace to be "Ready" or be gone ...
I1216 04:28:24.883591 9940 pod_ready.go:83] waiting for pod "kube-apiserver-addons-153066" in "kube-system" namespace to be "Ready" or be gone ...
I1216 04:28:24.888868 9940 pod_ready.go:94] pod "kube-apiserver-addons-153066" is "Ready"
I1216 04:28:24.888898 9940 pod_ready.go:86] duration metric: took 5.283388ms for pod "kube-apiserver-addons-153066" in "kube-system" namespace to be "Ready" or be gone ...
I1216 04:28:24.891390 9940 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-153066" in "kube-system" namespace to be "Ready" or be gone ...
I1216 04:28:25.270569 9940 pod_ready.go:94] pod "kube-controller-manager-addons-153066" is "Ready"
I1216 04:28:25.270595 9940 pod_ready.go:86] duration metric: took 379.184872ms for pod "kube-controller-manager-addons-153066" in "kube-system" namespace to be "Ready" or be gone ...
I1216 04:28:25.471487 9940 pod_ready.go:83] waiting for pod "kube-proxy-h5nhv" in "kube-system" namespace to be "Ready" or be gone ...
I1216 04:28:25.870658 9940 pod_ready.go:94] pod "kube-proxy-h5nhv" is "Ready"
I1216 04:28:25.870685 9940 pod_ready.go:86] duration metric: took 399.174437ms for pod "kube-proxy-h5nhv" in "kube-system" namespace to be "Ready" or be gone ...
I1216 04:28:26.070258 9940 pod_ready.go:83] waiting for pod "kube-scheduler-addons-153066" in "kube-system" namespace to be "Ready" or be gone ...
I1216 04:28:26.470288 9940 pod_ready.go:94] pod "kube-scheduler-addons-153066" is "Ready"
I1216 04:28:26.470331 9940 pod_ready.go:86] duration metric: took 400.047581ms for pod "kube-scheduler-addons-153066" in "kube-system" namespace to be "Ready" or be gone ...
I1216 04:28:26.470343 9940 pod_ready.go:40] duration metric: took 1.60471408s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1216 04:28:26.515117 9940 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
I1216 04:28:26.517092 9940 out.go:179] * Done! kubectl is now configured to use "addons-153066" cluster and "default" namespace by default
==> CRI-O <==
Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.002019271Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: a2654019-ae9a-44ed-ba5e-6eea0488c198,},},}" file="otel-collector/interceptors.go:62" id=b7464afb-69e7-48ba-872c-728d271fead4 name=/runtime.v1.RuntimeService/ListContainers
Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.002086578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7464afb-69e7-48ba-872c-728d271fead4 name=/runtime.v1.RuntimeService/ListContainers
Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.002129046Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b7464afb-69e7-48ba-872c-728d271fead4 name=/runtime.v1.RuntimeService/ListContainers
Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.017824820Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=80b687bf-24cd-47ac-9b5f-165cc6389cd1 name=/runtime.v1.RuntimeService/Version
Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.017928178Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=80b687bf-24cd-47ac-9b5f-165cc6389cd1 name=/runtime.v1.RuntimeService/Version
Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.019581703Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=19097fdc-cc35-4711-b45e-e441f3ed4864 name=/runtime.v1.ImageService/ImageFsInfo
Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.020803028Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765859490020779842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19097fdc-cc35-4711-b45e-e441f3ed4864 name=/runtime.v1.ImageService/ImageFsInfo
Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.021768202Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e707ed32-af99-44e9-a407-de2a9b62cdb9 name=/runtime.v1.RuntimeService/ListContainers
Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.021825592Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e707ed32-af99-44e9-a407-de2a9b62cdb9 name=/runtime.v1.RuntimeService/ListContainers
Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.022122016Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9c809cddb7112d1c7f70c743a875a56ab59b90619641c72d688e3ba2d24ac3a,PodSandboxId:17eb35aa9b53a0cbfe496edc48b52aae8c351cd263c852599db6b66850208570,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765859348228109676,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b42be6a9-0973-4607-a39f-f43345bc18fe,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da1899bf0f3725a259538db15d9fcc9b1551687d0ddb914b38868eeb0ea596e2,PodSandboxId:3057e156546032d0b91e4b2a3110f83f38627941d3cb682f610b07a868e47f75,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765859311004531465,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e064416f-1c71-491d-b296-b0861bd3abce,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f2c27cd73ef41acf51f70ed2f2a53463e9cadb906a521bc2a9679053975ca9,PodSandboxId:2d7797ab913e2012f29dc08e8701662f5934ae23c91b89a2a821b51e92857193,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765859298209918222,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-w5fvb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7a81b0ad-2518-45a8-912f-6dc296e4f3fd,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:651b3c32c1f8afca500fbe414cf50f890236f2a90dd3c0369135285365c30c42,PodSandboxId:37271fa0c448ebcf5e86caef7f50024b856ec2fd142e3b06d605e4d67580da73,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765859277123476412,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cjxvw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 59f3ba53-c633-46ba-85da-f30ba2227661,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:485e0f16c003025a84dc9dbceddbcbbfcf5f54e6d123ae985a2eb702d3d7bd60,PodSandboxId:53be126e98c2d62d3d28b7c196d30c305aef01ee1fec0fd42c3c7519f5b31c39,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765859276991069254,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7tk55,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a2f1a313-0984-4131-9870-722d9503ac19,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4d570fbd10c2bc44ada6c782c62d1a117dab9964193005f6b32acdc6b37aea,PodSandboxId:0aa876268e46b217471958e00ced8d346769164ee6c78a961b003a696bf54604,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765859244093593480,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: becea7ef-45d0-4bec-8470-fe1f574391a6,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c53d024caa8b10e97f1f0e7d346d0bb5299e8cc00164b4b3771a24398d8fc43d,PodSandboxId:af30bd1e8cf2e112556350a54dff781fc27dbedadaeb2a3a9ceecc810e6f2e36,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765859234500420408,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hhs5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c605597-4044-4415-a423-ac0bc2d63d1f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d065f232053c70c374d73496a42830ab0ba8afe9511c424efc1c7b52d7024ab4,PodSandboxId:7e7dc1959db100062964b803f1ebf21880904343769d85239600546e8fb1547b,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765859225045969565,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 261052b5-937f-4f46-8238-ab5a0913c588,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572d7b3b73779232f337aadde20111aa325376ffc431153129764d474c3172f1,PodSandboxId:82443fd12107522a7421d02a683c0eb20e501025efb36d6e8b1c5aa2af8053b3,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765859217549240089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-k5hzj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c86aac94-7319-4717-b09c-4c5ce48d083b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2da8aa9cf49d54a771b62b0b36d4dcca12b05afaa1ae334b8ddc6f491c8d26a,PodSandboxId:39dfd92507902a4f0ae2045a138312531afe92d81dcdb55807b374192ee791e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765859216974023700,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5nhv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c8054c-fb42-44bb-96c3-b9e2b534f591,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6f75243dd73b8f78264b726f85d4cefe141c6c8fb29f25f86e1e352c2302c5,PodSandboxId:cea2cbe33776e1a18bc2558f40b02902704d8f9e691c99a63bccb5a845297286,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765859205307896383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-153066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dcda41444dbf89830a69aa2ef3ed2fa,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:596fad690cd4ff248141fb728834feee5769f975593c16b2c7310569225b0a05,PodSandboxId:7ed0f7d74790722c5225f0ee9a4c49794cd09eb16ec34ae331c0b6346d001613,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765859205315782468,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-153066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d396e6c09d971f3da5ab405f520ebf96,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a668ea7a775ace807986d650432e9486841b1694cc7e3cea4aa90f9db74d4d26,PodSandboxId:2b687031565d0d353a82b315bdc7a9a49a11ae2184f22f5fd9c6dc453c8a900f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765859205296822726,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-153066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 328dfa107a11db8f9546f472798d351e,},Annotations:m
ap[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3920403bed4db8081a62b794d286d69772e0b066f21464c4462b0e238f3c104f,PodSandboxId:3c0d13efc0a7352880e11f349d731bb791fb853e0e24804081ea4d1dec39ce15,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765859205264336254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-addons-153066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 481d5acbe07932cec43964b40b18e484,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e707ed32-af99-44e9-a407-de2a9b62cdb9 name=/runtime.v1.RuntimeService/ListContainers
Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.045253807Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/1.0" file="docker/docker_client.go:631"
Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.056743749Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=df4aed0a-2629-469e-ae36-1797ef43c9e8 name=/runtime.v1.RuntimeService/Version
Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.056818713Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=df4aed0a-2629-469e-ae36-1797ef43c9e8 name=/runtime.v1.RuntimeService/Version
Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.058634055Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=97c92f94-a545-4521-8c08-cb27a8491a36 name=/runtime.v1.ImageService/ImageFsInfo
Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.060152308Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765859490060123354,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97c92f94-a545-4521-8c08-cb27a8491a36 name=/runtime.v1.ImageService/ImageFsInfo
Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.094938523Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59f14ae1-7f05-4fd6-8ed3-9a70923fc17e name=/runtime.v1.RuntimeService/Version
Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.095018739Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59f14ae1-7f05-4fd6-8ed3-9a70923fc17e name=/runtime.v1.RuntimeService/Version
Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.096593062Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50cb59e6-01a8-42a4-9ac2-1dcd7eb80f7a name=/runtime.v1.ImageService/ImageFsInfo
Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.098390966Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765859490098359880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50cb59e6-01a8-42a4-9ac2-1dcd7eb80f7a name=/runtime.v1.ImageService/ImageFsInfo
Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.099671400Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a48df7d-b113-45bc-8953-828e5454deac name=/runtime.v1.RuntimeService/ListContainers
Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.099926988Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a48df7d-b113-45bc-8953-828e5454deac name=/runtime.v1.RuntimeService/ListContainers
Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.100506298Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9c809cddb7112d1c7f70c743a875a56ab59b90619641c72d688e3ba2d24ac3a,PodSandboxId:17eb35aa9b53a0cbfe496edc48b52aae8c351cd263c852599db6b66850208570,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765859348228109676,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b42be6a9-0973-4607-a39f-f43345bc18fe,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da1899bf0f3725a259538db15d9fcc9b1551687d0ddb914b38868eeb0ea596e2,PodSandboxId:3057e156546032d0b91e4b2a3110f83f38627941d3cb682f610b07a868e47f75,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765859311004531465,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e064416f-1c71-491d-b296-b0861bd3abce,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f2c27cd73ef41acf51f70ed2f2a53463e9cadb906a521bc2a9679053975ca9,PodSandboxId:2d7797ab913e2012f29dc08e8701662f5934ae23c91b89a2a821b51e92857193,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765859298209918222,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-w5fvb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7a81b0ad-2518-45a8-912f-6dc296e4f3fd,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:651b3c32c1f8afca500fbe414cf50f890236f2a90dd3c0369135285365c30c42,PodSandboxId:37271fa0c448ebcf5e86caef7f50024b856ec2fd142e3b06d605e4d67580da73,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765859277123476412,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cjxvw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 59f3ba53-c633-46ba-85da-f30ba2227661,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:485e0f16c003025a84dc9dbceddbcbbfcf5f54e6d123ae985a2eb702d3d7bd60,PodSandboxId:53be126e98c2d62d3d28b7c196d30c305aef01ee1fec0fd42c3c7519f5b31c39,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765859276991069254,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7tk55,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a2f1a313-0984-4131-9870-722d9503ac19,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4d570fbd10c2bc44ada6c782c62d1a117dab9964193005f6b32acdc6b37aea,PodSandboxId:0aa876268e46b217471958e00ced8d346769164ee6c78a961b003a696bf54604,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765859244093593480,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: becea7ef-45d0-4bec-8470-fe1f574391a6,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c53d024caa8b10e97f1f0e7d346d0bb5299e8cc00164b4b3771a24398d8fc43d,PodSandboxId:af30bd1e8cf2e112556350a54dff781fc27dbedadaeb2a3a9ceecc810e6f2e36,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765859234500420408,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hhs5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c605597-4044-4415-a423-ac0bc2d63d1f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d065f232053c70c374d73496a42830ab0ba8afe9511c424efc1c7b52d7024ab4,PodSandboxId:7e7dc1959db100062964b803f1ebf21880904343769d85239600546e8fb1547b,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765859225045969565,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 261052b5-937f-4f46-8238-ab5a0913c588,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572d7b3b73779232f337aadde20111aa325376ffc431153129764d474c3172f1,PodSandboxId:82443fd12107522a7421d02a683c0eb20e501025efb36d6e8b1c5aa2af8053b3,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765859217549240089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-k5hzj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c86aac94-7319-4717-b09c-4c5ce48d083b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2da8aa9cf49d54a771b62b0b36d4dcca12b05afaa1ae334b8ddc6f491c8d26a,PodSandboxId:39dfd92507902a4f0ae2045a138312531afe92d81dcdb55807b374192ee791e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765859216974023700,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5nhv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c8054c-fb42-44bb-96c3-b9e2b534f591,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6f75243dd73b8f78264b726f85d4cefe141c6c8fb29f25f86e1e352c2302c5,PodSandboxId:cea2cbe33776e1a18bc2558f40b02902704d8f9e691c99a63bccb5a845297286,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765859205307896383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-153066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dcda41444dbf89830a69aa2ef3ed2fa,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:596fad690cd4ff248141fb728834feee5769f975593c16b2c7310569225b0a05,PodSandboxId:7ed0f7d74790722c5225f0ee9a4c49794cd09eb16ec34ae331c0b6346d001613,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765859205315782468,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-153066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d396e6c09d971f3da5ab405f520ebf96,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a668ea7a775ace807986d650432e9486841b1694cc7e3cea4aa90f9db74d4d26,PodSandboxId:2b687031565d0d353a82b315bdc7a9a49a11ae2184f22f5fd9c6dc453c8a900f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765859205296822726,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-153066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 328dfa107a11db8f9546f472798d351e,},Annotations:m
ap[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3920403bed4db8081a62b794d286d69772e0b066f21464c4462b0e238f3c104f,PodSandboxId:3c0d13efc0a7352880e11f349d731bb791fb853e0e24804081ea4d1dec39ce15,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765859205264336254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-addons-153066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 481d5acbe07932cec43964b40b18e484,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8a48df7d-b113-45bc-8953-828e5454deac name=/runtime.v1.RuntimeService/ListContainers
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
b9c809cddb711 public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff 2 minutes ago Running nginx 0 17eb35aa9b53a nginx default
da1899bf0f372 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 2 minutes ago Running busybox 0 3057e15654603 busybox default
d6f2c27cd73ef registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad 3 minutes ago Running controller 0 2d7797ab913e2 ingress-nginx-controller-85d4c799dd-w5fvb ingress-nginx
651b3c32c1f8a registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285 3 minutes ago Exited patch 0 37271fa0c448e ingress-nginx-admission-patch-cjxvw ingress-nginx
485e0f16c0030 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285 3 minutes ago Exited create 0 53be126e98c2d ingress-nginx-admission-create-7tk55 ingress-nginx
5a4d570fbd10c docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 4 minutes ago Running minikube-ingress-dns 0 0aa876268e46b kube-ingress-dns-minikube kube-system
c53d024caa8b1 docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 4 minutes ago Running amd-gpu-device-plugin 0 af30bd1e8cf2e amd-gpu-device-plugin-hhs5c kube-system
d065f232053c7 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 7e7dc1959db10 storage-provisioner kube-system
572d7b3b73779 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969 4 minutes ago Running coredns 0 82443fd121075 coredns-66bc5c9577-k5hzj kube-system
b2da8aa9cf49d 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45 4 minutes ago Running kube-proxy 0 39dfd92507902 kube-proxy-h5nhv kube-system
596fad690cd4f 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952 4 minutes ago Running kube-scheduler 0 7ed0f7d747907 kube-scheduler-addons-153066 kube-system
4a6f75243dd73 a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1 4 minutes ago Running etcd 0 cea2cbe33776e etcd-addons-153066 kube-system
a668ea7a775ac a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85 4 minutes ago Running kube-apiserver 0 2b687031565d0 kube-apiserver-addons-153066 kube-system
3920403bed4db 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8 4 minutes ago Running kube-controller-manager 0 3c0d13efc0a73 kube-controller-manager-addons-153066 kube-system
==> coredns [572d7b3b73779232f337aadde20111aa325376ffc431153129764d474c3172f1] <==
[INFO] 10.244.0.8:51008 - 36670 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00044604s
[INFO] 10.244.0.8:51008 - 30170 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000136714s
[INFO] 10.244.0.8:51008 - 45330 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00015605s
[INFO] 10.244.0.8:51008 - 11419 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000213065s
[INFO] 10.244.0.8:51008 - 2807 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000211715s
[INFO] 10.244.0.8:51008 - 32361 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000309435s
[INFO] 10.244.0.8:51008 - 25917 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000119812s
[INFO] 10.244.0.8:59063 - 61784 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000106261s
[INFO] 10.244.0.8:59063 - 62099 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000293636s
[INFO] 10.244.0.8:57569 - 48647 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000073974s
[INFO] 10.244.0.8:57569 - 48330 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00007519s
[INFO] 10.244.0.8:33855 - 20295 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000173748s
[INFO] 10.244.0.8:33855 - 20567 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000125821s
[INFO] 10.244.0.8:42237 - 24256 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000112922s
[INFO] 10.244.0.8:42237 - 24003 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000427458s
[INFO] 10.244.0.23:47614 - 8014 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001636225s
[INFO] 10.244.0.23:41544 - 61392 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000697809s
[INFO] 10.244.0.23:44766 - 37939 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000193283s
[INFO] 10.244.0.23:47130 - 44624 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000115496s
[INFO] 10.244.0.23:36193 - 53897 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000205927s
[INFO] 10.244.0.23:53679 - 44985 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000238401s
[INFO] 10.244.0.23:37277 - 51910 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001583031s
[INFO] 10.244.0.23:52153 - 31072 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00358856s
[INFO] 10.244.0.26:37386 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000302807s
[INFO] 10.244.0.26:51712 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000164445s
==> describe nodes <==
Name: addons-153066
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-153066
kubernetes.io/os=linux
minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
minikube.k8s.io/name=addons-153066
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_16T04_26_51_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-153066
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 16 Dec 2025 04:26:48 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-153066
AcquireTime: <unset>
RenewTime: Tue, 16 Dec 2025 04:31:26 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 16 Dec 2025 04:29:34 +0000 Tue, 16 Dec 2025 04:26:45 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 16 Dec 2025 04:29:34 +0000 Tue, 16 Dec 2025 04:26:45 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 16 Dec 2025 04:29:34 +0000 Tue, 16 Dec 2025 04:26:45 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 16 Dec 2025 04:29:34 +0000 Tue, 16 Dec 2025 04:26:51 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.189
Hostname: addons-153066
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
System Info:
Machine ID: b9b6581480b74e0a92c84d21ede24ac3
System UUID: b9b65814-80b7-4e0a-92c8-4d21ede24ac3
Boot ID: c252c32f-203c-4e98-a15c-5bb5727105f2
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.34.2
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (13 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m3s
default hello-world-app-5d498dc89-7bj4k 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m29s
ingress-nginx ingress-nginx-controller-85d4c799dd-w5fvb 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m26s
kube-system amd-gpu-device-plugin-hhs5c 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m31s
kube-system coredns-66bc5c9577-k5hzj 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4m34s
kube-system etcd-addons-153066 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 4m40s
kube-system kube-apiserver-addons-153066 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m41s
kube-system kube-controller-manager-addons-153066 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m40s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m29s
kube-system kube-proxy-h5nhv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m35s
kube-system kube-scheduler-addons-153066 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m40s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m29s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m32s kube-proxy
Normal Starting 4m47s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m46s (x8 over 4m47s) kubelet Node addons-153066 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m46s (x8 over 4m47s) kubelet Node addons-153066 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m46s (x7 over 4m47s) kubelet Node addons-153066 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m46s kubelet Updated Node Allocatable limit across pods
Normal Starting 4m40s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 4m40s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 4m40s kubelet Node addons-153066 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m40s kubelet Node addons-153066 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m40s kubelet Node addons-153066 status is now: NodeHasSufficientPID
Normal NodeReady 4m39s kubelet Node addons-153066 status is now: NodeReady
Normal RegisteredNode 4m36s node-controller Node addons-153066 event: Registered Node addons-153066 in Controller
==> dmesg <==
[ +0.103587] kauditd_printk_skb: 437 callbacks suppressed
[ +5.929153] kauditd_printk_skb: 245 callbacks suppressed
[ +9.589881] kauditd_printk_skb: 5 callbacks suppressed
[ +6.679554] kauditd_printk_skb: 20 callbacks suppressed
[ +0.656177] kauditd_printk_skb: 11 callbacks suppressed
[ +5.521836] kauditd_printk_skb: 17 callbacks suppressed
[ +5.654502] kauditd_printk_skb: 38 callbacks suppressed
[ +5.127981] kauditd_printk_skb: 20 callbacks suppressed
[Dec16 04:28] kauditd_printk_skb: 192 callbacks suppressed
[ +1.977677] kauditd_printk_skb: 120 callbacks suppressed
[ +6.151932] kauditd_printk_skb: 95 callbacks suppressed
[ +5.780054] kauditd_printk_skb: 32 callbacks suppressed
[ +2.242811] kauditd_printk_skb: 47 callbacks suppressed
[ +10.774591] kauditd_printk_skb: 17 callbacks suppressed
[ +5.873439] kauditd_printk_skb: 22 callbacks suppressed
[ +6.022031] kauditd_printk_skb: 38 callbacks suppressed
[ +0.000028] kauditd_printk_skb: 57 callbacks suppressed
[Dec16 04:29] kauditd_printk_skb: 129 callbacks suppressed
[ +3.295583] kauditd_printk_skb: 173 callbacks suppressed
[ +1.849988] kauditd_printk_skb: 106 callbacks suppressed
[ +1.805160] kauditd_printk_skb: 96 callbacks suppressed
[ +0.000313] kauditd_printk_skb: 22 callbacks suppressed
[ +5.918307] kauditd_printk_skb: 41 callbacks suppressed
[ +7.727768] kauditd_printk_skb: 127 callbacks suppressed
[Dec16 04:31] kauditd_printk_skb: 10 callbacks suppressed
==> etcd [4a6f75243dd73b8f78264b726f85d4cefe141c6c8fb29f25f86e1e352c2302c5] <==
{"level":"warn","ts":"2025-12-16T04:27:50.304780Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-16T04:27:49.917199Z","time spent":"387.576146ms","remote":"127.0.0.1:60342","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":0,"response size":28,"request content":"key:\"/registry/apiregistration.k8s.io/apiservices\" limit:1 "}
{"level":"warn","ts":"2025-12-16T04:27:50.304880Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"457.745222ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-16T04:27:50.304969Z","caller":"traceutil/trace.go:172","msg":"trace[1830167884] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1019; }","duration":"457.834126ms","start":"2025-12-16T04:27:49.847130Z","end":"2025-12-16T04:27:50.304964Z","steps":["trace[1830167884] 'agreement among raft nodes before linearized reading' (duration: 457.735529ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-16T04:27:50.305075Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-16T04:27:49.847113Z","time spent":"457.954785ms","remote":"127.0.0.1:59592","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
{"level":"info","ts":"2025-12-16T04:27:50.306684Z","caller":"traceutil/trace.go:172","msg":"trace[1705315545] transaction","detail":"{read_only:false; response_revision:1020; number_of_response:1; }","duration":"217.426034ms","start":"2025-12-16T04:27:50.089250Z","end":"2025-12-16T04:27:50.306676Z","steps":["trace[1705315545] 'process raft request' (duration: 215.227705ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-16T04:27:55.458732Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.663759ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-16T04:27:55.458811Z","caller":"traceutil/trace.go:172","msg":"trace[433126633] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1031; }","duration":"119.792212ms","start":"2025-12-16T04:27:55.339002Z","end":"2025-12-16T04:27:55.458794Z","steps":["trace[433126633] 'agreement among raft nodes before linearized reading' (duration: 68.578013ms)","trace[433126633] 'range keys from in-memory index tree' (duration: 51.096611ms)"],"step_count":2}
{"level":"warn","ts":"2025-12-16T04:27:55.458842Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.019484ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-16T04:27:55.458911Z","caller":"traceutil/trace.go:172","msg":"trace[848838298] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:1031; }","duration":"111.109059ms","start":"2025-12-16T04:27:55.347792Z","end":"2025-12-16T04:27:55.458901Z","steps":["trace[848838298] 'agreement among raft nodes before linearized reading' (duration: 110.958009ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-16T04:27:55.459421Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.182506ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-16T04:27:55.459517Z","caller":"traceutil/trace.go:172","msg":"trace[2135363795] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1031; }","duration":"113.282108ms","start":"2025-12-16T04:27:55.346228Z","end":"2025-12-16T04:27:55.459510Z","steps":["trace[2135363795] 'agreement among raft nodes before linearized reading' (duration: 113.163666ms)"],"step_count":1}
{"level":"info","ts":"2025-12-16T04:28:03.663490Z","caller":"traceutil/trace.go:172","msg":"trace[1825020145] transaction","detail":"{read_only:false; response_revision:1096; number_of_response:1; }","duration":"127.506566ms","start":"2025-12-16T04:28:03.535965Z","end":"2025-12-16T04:28:03.663471Z","steps":["trace[1825020145] 'process raft request' (duration: 127.378292ms)"],"step_count":1}
{"level":"info","ts":"2025-12-16T04:28:10.031424Z","caller":"traceutil/trace.go:172","msg":"trace[115661521] linearizableReadLoop","detail":"{readStateIndex:1174; appliedIndex:1174; }","duration":"215.26937ms","start":"2025-12-16T04:28:09.816140Z","end":"2025-12-16T04:28:10.031409Z","steps":["trace[115661521] 'read index received' (duration: 215.26508ms)","trace[115661521] 'applied index is now lower than readState.Index' (duration: 3.82µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-16T04:28:10.032670Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"216.51431ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-16T04:28:10.032699Z","caller":"traceutil/trace.go:172","msg":"trace[1957910568] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1144; }","duration":"216.554598ms","start":"2025-12-16T04:28:09.816137Z","end":"2025-12-16T04:28:10.032692Z","steps":["trace[1957910568] 'agreement among raft nodes before linearized reading' (duration: 216.483054ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-16T04:28:10.032974Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"186.482559ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-16T04:28:10.033025Z","caller":"traceutil/trace.go:172","msg":"trace[1037108114] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1144; }","duration":"186.539408ms","start":"2025-12-16T04:28:09.846479Z","end":"2025-12-16T04:28:10.033018Z","steps":["trace[1037108114] 'agreement among raft nodes before linearized reading' (duration: 186.46712ms)"],"step_count":1}
{"level":"info","ts":"2025-12-16T04:28:19.771188Z","caller":"traceutil/trace.go:172","msg":"trace[1729950387] transaction","detail":"{read_only:false; response_revision:1176; number_of_response:1; }","duration":"101.410828ms","start":"2025-12-16T04:28:19.669721Z","end":"2025-12-16T04:28:19.771132Z","steps":["trace[1729950387] 'process raft request' (duration: 100.42485ms)"],"step_count":1}
{"level":"info","ts":"2025-12-16T04:28:51.603726Z","caller":"traceutil/trace.go:172","msg":"trace[1753626828] transaction","detail":"{read_only:false; response_revision:1353; number_of_response:1; }","duration":"112.097375ms","start":"2025-12-16T04:28:51.491604Z","end":"2025-12-16T04:28:51.603702Z","steps":["trace[1753626828] 'process raft request' (duration: 111.747327ms)"],"step_count":1}
{"level":"info","ts":"2025-12-16T04:28:53.229810Z","caller":"traceutil/trace.go:172","msg":"trace[518491461] transaction","detail":"{read_only:false; response_revision:1355; number_of_response:1; }","duration":"381.117813ms","start":"2025-12-16T04:28:52.848675Z","end":"2025-12-16T04:28:53.229793Z","steps":["trace[518491461] 'process raft request' (duration: 381.000848ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-16T04:28:53.229982Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-16T04:28:52.848649Z","time spent":"381.223566ms","remote":"127.0.0.1:59714","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1347 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
{"level":"info","ts":"2025-12-16T04:28:53.749168Z","caller":"traceutil/trace.go:172","msg":"trace[938438409] linearizableReadLoop","detail":"{readStateIndex:1396; appliedIndex:1396; }","duration":"111.558845ms","start":"2025-12-16T04:28:53.637592Z","end":"2025-12-16T04:28:53.749151Z","steps":["trace[938438409] 'read index received' (duration: 111.552232ms)","trace[938438409] 'applied index is now lower than readState.Index' (duration: 5.756µs)"],"step_count":2}
{"level":"info","ts":"2025-12-16T04:28:53.749341Z","caller":"traceutil/trace.go:172","msg":"trace[1468599566] transaction","detail":"{read_only:false; response_revision:1356; number_of_response:1; }","duration":"125.597826ms","start":"2025-12-16T04:28:53.623732Z","end":"2025-12-16T04:28:53.749330Z","steps":["trace[1468599566] 'process raft request' (duration: 125.438909ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-16T04:28:53.749450Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.843075ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" limit:1 ","response":"range_response_count:1 size:3395"}
{"level":"info","ts":"2025-12-16T04:28:53.749475Z","caller":"traceutil/trace.go:172","msg":"trace[320375236] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:1; response_revision:1356; }","duration":"111.881768ms","start":"2025-12-16T04:28:53.637587Z","end":"2025-12-16T04:28:53.749469Z","steps":["trace[320375236] 'agreement among raft nodes before linearized reading' (duration: 111.740874ms)"],"step_count":1}
==> kernel <==
04:31:30 up 5 min, 0 users, load average: 1.12, 1.56, 0.79
Linux addons-153066 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 03:41:16 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [a668ea7a775ace807986d650432e9486841b1694cc7e3cea4aa90f9db74d4d26] <==
E1216 04:27:49.434708 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.195.173:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.195.173:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.195.173:443: connect: connection refused" logger="UnhandledError"
E1216 04:27:49.436875 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.195.173:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.195.173:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.195.173:443: connect: connection refused" logger="UnhandledError"
I1216 04:27:49.546766 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E1216 04:28:38.287830 1 conn.go:339] Error on socket receive: read tcp 192.168.39.189:8443->192.168.39.1:51728: use of closed network connection
E1216 04:28:38.495772 1 conn.go:339] Error on socket receive: read tcp 192.168.39.189:8443->192.168.39.1:51758: use of closed network connection
I1216 04:28:47.578145 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.2.168"}
I1216 04:29:00.914370 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1216 04:29:01.119375 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.125.73"}
I1216 04:29:14.521583 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
E1216 04:29:35.757267 1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
I1216 04:29:41.144669 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1216 04:29:41.144793 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1216 04:29:41.182929 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1216 04:29:41.183024 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1216 04:29:41.184690 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1216 04:29:41.184737 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1216 04:29:41.202954 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1216 04:29:41.203061 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1216 04:29:41.227427 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1216 04:29:41.227467 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1216 04:29:42.185950 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1216 04:29:42.230661 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W1216 04:29:42.243389 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I1216 04:29:50.468152 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1216 04:31:29.005965 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.103.94"}
==> kube-controller-manager [3920403bed4db8081a62b794d286d69772e0b066f21464c4462b0e238f3c104f] <==
E1216 04:29:50.097648 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1216 04:29:52.156066 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1216 04:29:52.157421 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
I1216 04:29:55.151774 1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
I1216 04:29:55.151903 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1216 04:29:55.248853 1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
I1216 04:29:55.248949 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E1216 04:29:56.722993 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1216 04:29:56.724076 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1216 04:29:59.574508 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1216 04:29:59.576550 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1216 04:30:00.356470 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1216 04:30:00.357576 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1216 04:30:11.613467 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1216 04:30:11.614787 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1216 04:30:16.180198 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1216 04:30:16.181149 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1216 04:30:22.796768 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1216 04:30:22.797917 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1216 04:30:48.847184 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1216 04:30:48.848265 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1216 04:30:59.455199 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1216 04:30:59.456356 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1216 04:31:12.121619 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1216 04:31:12.122886 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
==> kube-proxy [b2da8aa9cf49d54a771b62b0b36d4dcca12b05afaa1ae334b8ddc6f491c8d26a] <==
I1216 04:26:57.660859 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1216 04:26:57.863491 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1216 04:26:57.867975 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.189"]
E1216 04:26:57.879729 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1216 04:26:58.160977 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1216 04:26:58.161259 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1216 04:26:58.161473 1 server_linux.go:132] "Using iptables Proxier"
I1216 04:26:58.193423 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1216 04:26:58.193761 1 server.go:527] "Version info" version="v1.34.2"
I1216 04:26:58.193773 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1216 04:26:58.199379 1 config.go:106] "Starting endpoint slice config controller"
I1216 04:26:58.199394 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1216 04:26:58.200491 1 config.go:403] "Starting serviceCIDR config controller"
I1216 04:26:58.200499 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1216 04:26:58.201173 1 config.go:309] "Starting node config controller"
I1216 04:26:58.201179 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1216 04:26:58.201184 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1216 04:26:58.206919 1 config.go:200] "Starting service config controller"
I1216 04:26:58.207589 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1216 04:26:58.300660 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1216 04:26:58.300716 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1216 04:26:58.308028 1 shared_informer.go:356] "Caches are synced" controller="service config"
==> kube-scheduler [596fad690cd4ff248141fb728834feee5769f975593c16b2c7310569225b0a05] <==
E1216 04:26:48.053426 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1216 04:26:48.053506 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1216 04:26:48.053617 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
I1216 04:26:48.044233 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
E1216 04:26:48.053940 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1216 04:26:48.055058 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1216 04:26:48.055168 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1216 04:26:48.055396 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1216 04:26:48.056021 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1216 04:26:48.057613 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1216 04:26:48.880473 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1216 04:26:48.902919 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1216 04:26:48.910689 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1216 04:26:48.947658 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1216 04:26:49.043536 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1216 04:26:49.086723 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1216 04:26:49.120559 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1216 04:26:49.145911 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1216 04:26:49.212837 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1216 04:26:49.250007 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1216 04:26:49.271786 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1216 04:26:49.359456 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1216 04:26:49.365672 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1216 04:26:49.375336 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
I1216 04:26:52.545660 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Dec 16 04:29:54 addons-153066 kubelet[1511]: I1216 04:29:54.087212 1511 scope.go:117] "RemoveContainer" containerID="420853799a3443a976842aa6505c894cef77bae1c6a3f9f045b830f405f607c9"
Dec 16 04:29:54 addons-153066 kubelet[1511]: I1216 04:29:54.206077 1511 scope.go:117] "RemoveContainer" containerID="bb8f65c76e45e9f6f722d1fa821cd3b4655159f24d8140f9c3af0a1ab68b5dff"
Dec 16 04:29:54 addons-153066 kubelet[1511]: I1216 04:29:54.330153 1511 scope.go:117] "RemoveContainer" containerID="c8d31a8f6f3088e13d66b5bec43b0837f24d42a9805bb334ecc3af167f52fbcd"
Dec 16 04:30:00 addons-153066 kubelet[1511]: I1216 04:30:00.597042 1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Dec 16 04:30:01 addons-153066 kubelet[1511]: E1216 04:30:01.081720 1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765859401081116170 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 04:30:01 addons-153066 kubelet[1511]: E1216 04:30:01.081747 1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765859401081116170 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 04:30:11 addons-153066 kubelet[1511]: E1216 04:30:11.086930 1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765859411085340839 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 04:30:11 addons-153066 kubelet[1511]: E1216 04:30:11.086974 1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765859411085340839 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 04:30:21 addons-153066 kubelet[1511]: E1216 04:30:21.089801 1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765859421089419175 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 04:30:21 addons-153066 kubelet[1511]: E1216 04:30:21.089844 1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765859421089419175 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 04:30:31 addons-153066 kubelet[1511]: E1216 04:30:31.095747 1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765859431094081659 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 04:30:31 addons-153066 kubelet[1511]: E1216 04:30:31.095775 1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765859431094081659 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 04:30:41 addons-153066 kubelet[1511]: E1216 04:30:41.099444 1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765859441098557380 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 04:30:41 addons-153066 kubelet[1511]: E1216 04:30:41.099470 1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765859441098557380 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 04:30:51 addons-153066 kubelet[1511]: E1216 04:30:51.102148 1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765859451101625750 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 04:30:51 addons-153066 kubelet[1511]: E1216 04:30:51.102553 1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765859451101625750 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 04:31:01 addons-153066 kubelet[1511]: E1216 04:31:01.106088 1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765859461105764696 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 04:31:01 addons-153066 kubelet[1511]: E1216 04:31:01.106108 1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765859461105764696 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 04:31:07 addons-153066 kubelet[1511]: I1216 04:31:07.596602 1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Dec 16 04:31:09 addons-153066 kubelet[1511]: I1216 04:31:09.596058 1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-hhs5c" secret="" err="secret \"gcp-auth\" not found"
Dec 16 04:31:11 addons-153066 kubelet[1511]: E1216 04:31:11.109675 1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765859471109253167 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 04:31:11 addons-153066 kubelet[1511]: E1216 04:31:11.109704 1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765859471109253167 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 04:31:21 addons-153066 kubelet[1511]: E1216 04:31:21.114013 1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765859481112858006 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 04:31:21 addons-153066 kubelet[1511]: E1216 04:31:21.114400 1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765859481112858006 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 04:31:29 addons-153066 kubelet[1511]: I1216 04:31:29.042728 1511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmt6w\" (UniqueName: \"kubernetes.io/projected/a2654019-ae9a-44ed-ba5e-6eea0488c198-kube-api-access-fmt6w\") pod \"hello-world-app-5d498dc89-7bj4k\" (UID: \"a2654019-ae9a-44ed-ba5e-6eea0488c198\") " pod="default/hello-world-app-5d498dc89-7bj4k"
==> storage-provisioner [d065f232053c70c374d73496a42830ab0ba8afe9511c424efc1c7b52d7024ab4] <==
W1216 04:31:06.499107 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 04:31:08.503395 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 04:31:08.508110 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 04:31:10.512767 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 04:31:10.517918 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 04:31:12.521816 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 04:31:12.529563 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 04:31:14.533480 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 04:31:14.538641 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 04:31:16.542813 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 04:31:16.548147 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 04:31:18.552008 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 04:31:18.557497 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 04:31:20.561718 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 04:31:20.569163 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 04:31:22.572877 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 04:31:22.578627 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 04:31:24.582181 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 04:31:24.589586 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 04:31:26.593872 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 04:31:26.602611 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 04:31:28.606256 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 04:31:28.613739 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 04:31:30.619451 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 04:31:30.627170 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:263: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-153066 -n addons-153066
helpers_test.go:270: (dbg) Run: kubectl --context addons-153066 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-7bj4k ingress-nginx-admission-create-7tk55 ingress-nginx-admission-patch-cjxvw
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run: kubectl --context addons-153066 describe pod hello-world-app-5d498dc89-7bj4k ingress-nginx-admission-create-7tk55 ingress-nginx-admission-patch-cjxvw
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-153066 describe pod hello-world-app-5d498dc89-7bj4k ingress-nginx-admission-create-7tk55 ingress-nginx-admission-patch-cjxvw: exit status 1 (78.566856ms)
-- stdout --
Name:             hello-world-app-5d498dc89-7bj4k
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-153066/192.168.39.189
Start Time:       Tue, 16 Dec 2025 04:31:28 +0000
Labels:           app=hello-world-app
                  pod-template-hash=5d498dc89
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/hello-world-app-5d498dc89
Containers:
  hello-world-app:
    Container ID:
    Image:          docker.io/kicbase/echo-server:1.0
    Image ID:
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fmt6w (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-fmt6w:
    Type:                     Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:   3607
    ConfigMapName:            kube-root-ca.crt
    Optional:                 false
    DownwardAPI:              true
QoS Class:                    BestEffort
Node-Selectors:               <none>
Tolerations:                  node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-7bj4k to addons-153066
  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-7tk55" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-cjxvw" not found
** /stderr **
helpers_test.go:288: kubectl --context addons-153066 describe pod hello-world-app-5d498dc89-7bj4k ingress-nginx-admission-create-7tk55 ingress-nginx-admission-patch-cjxvw: exit status 1
addons_test.go:1055: (dbg) Run: out/minikube-linux-amd64 -p addons-153066 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run: out/minikube-linux-amd64 -p addons-153066 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-153066 addons disable ingress --alsologtostderr -v=1: (7.756327603s)
--- FAIL: TestAddons/parallel/Ingress (159.21s)