=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run: kubectl --context addons-262069 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run: kubectl --context addons-262069 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run: kubectl --context addons-262069 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [2d05a1d3-b173-402d-b417-d11ed3f1e38b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [2d05a1d3-b173-402d-b417-d11ed3f1e38b] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.007972185s
I1217 00:09:23.414929 17074 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run: out/minikube-linux-amd64 -p addons-262069 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-262069 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.225273339s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run: kubectl --context addons-262069 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run: out/minikube-linux-amd64 -p addons-262069 ip
addons_test.go:301: (dbg) Run: nslookup hello-john.test 192.168.39.183
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-262069 -n addons-262069
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-amd64 -p addons-262069 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-262069 logs -n 25: (1.500623001s)
helpers_test.go:261: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-330283 │ download-only-330283 │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │ 17 Dec 25 00:06 UTC │
│ start │ --download-only -p binary-mirror-467623 --alsologtostderr --binary-mirror http://127.0.0.1:43951 --driver=kvm2 --container-runtime=crio │ binary-mirror-467623 │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │ │
│ delete │ -p binary-mirror-467623 │ binary-mirror-467623 │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │ 17 Dec 25 00:06 UTC │
│ addons │ disable dashboard -p addons-262069 │ addons-262069 │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │ │
│ addons │ enable dashboard -p addons-262069 │ addons-262069 │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │ │
│ start │ -p addons-262069 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-262069 │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │ 17 Dec 25 00:08 UTC │
│ addons │ addons-262069 addons disable volcano --alsologtostderr -v=1 │ addons-262069 │ jenkins │ v1.37.0 │ 17 Dec 25 00:08 UTC │ 17 Dec 25 00:08 UTC │
│ addons │ addons-262069 addons disable gcp-auth --alsologtostderr -v=1 │ addons-262069 │ jenkins │ v1.37.0 │ 17 Dec 25 00:08 UTC │ 17 Dec 25 00:09 UTC │
│ addons │ enable headlamp -p addons-262069 --alsologtostderr -v=1 │ addons-262069 │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
│ addons │ addons-262069 addons disable metrics-server --alsologtostderr -v=1 │ addons-262069 │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
│ addons │ addons-262069 addons disable headlamp --alsologtostderr -v=1 │ addons-262069 │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
│ ip │ addons-262069 ip │ addons-262069 │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
│ addons │ addons-262069 addons disable registry --alsologtostderr -v=1 │ addons-262069 │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-262069 │ addons-262069 │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
│ addons │ addons-262069 addons disable registry-creds --alsologtostderr -v=1 │ addons-262069 │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
│ ssh │ addons-262069 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-262069 │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ │
│ addons │ addons-262069 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-262069 │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
│ ssh │ addons-262069 ssh cat /opt/local-path-provisioner/pvc-3eafbabf-bda1-4678-87d0-9af3d5bc37b7_default_test-pvc/file1 │ addons-262069 │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
│ addons │ addons-262069 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-262069 │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
│ addons │ addons-262069 addons disable yakd --alsologtostderr -v=1 │ addons-262069 │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
│ addons │ addons-262069 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-262069 │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
│ addons │ addons-262069 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-262069 │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
│ addons │ addons-262069 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-262069 │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
│ addons │ addons-262069 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-262069 │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
│ ip │ addons-262069 ip │ addons-262069 │ jenkins │ v1.37.0 │ 17 Dec 25 00:11 UTC │ 17 Dec 25 00:11 UTC │
└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/17 00:06:29
Running on machine: ubuntu-20-agent-13
Binary: Built with gc go1.25.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1217 00:06:29.344840 17911 out.go:360] Setting OutFile to fd 1 ...
I1217 00:06:29.345113 17911 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:06:29.345122 17911 out.go:374] Setting ErrFile to fd 2...
I1217 00:06:29.345127 17911 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:06:29.345317 17911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
I1217 00:06:29.345802 17911 out.go:368] Setting JSON to false
I1217 00:06:29.346677 17911 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2935,"bootTime":1765927054,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1217 00:06:29.346729 17911 start.go:143] virtualization: kvm guest
I1217 00:06:29.348924 17911 out.go:179] * [addons-262069] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1217 00:06:29.350295 17911 out.go:179] - MINIKUBE_LOCATION=22168
I1217 00:06:29.350312 17911 notify.go:221] Checking for updates...
I1217 00:06:29.353771 17911 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1217 00:06:29.355236 17911 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
I1217 00:06:29.356587 17911 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
I1217 00:06:29.357980 17911 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1217 00:06:29.359290 17911 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1217 00:06:29.360868 17911 driver.go:422] Setting default libvirt URI to qemu:///system
I1217 00:06:29.391560 17911 out.go:179] * Using the kvm2 driver based on user configuration
I1217 00:06:29.392842 17911 start.go:309] selected driver: kvm2
I1217 00:06:29.392855 17911 start.go:927] validating driver "kvm2" against <nil>
I1217 00:06:29.392864 17911 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1217 00:06:29.393596 17911 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1217 00:06:29.393822 17911 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1217 00:06:29.393862 17911 cni.go:84] Creating CNI manager for ""
I1217 00:06:29.393905 17911 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1217 00:06:29.393913 17911 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1217 00:06:29.393959 17911 start.go:353] cluster config:
{Name:addons-262069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-262069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1217 00:06:29.394078 17911 iso.go:125] acquiring lock: {Name:mk94a221d1243bc618ab687e91468d7a3f9fe960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1217 00:06:29.395574 17911 out.go:179] * Starting "addons-262069" primary control-plane node in "addons-262069" cluster
I1217 00:06:29.396649 17911 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1217 00:06:29.396683 17911 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
I1217 00:06:29.396691 17911 cache.go:65] Caching tarball of preloaded images
I1217 00:06:29.396778 17911 preload.go:238] Found /home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1217 00:06:29.396793 17911 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
I1217 00:06:29.397086 17911 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/config.json ...
I1217 00:06:29.397108 17911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/config.json: {Name:mke599731771ab4633d490c64f121491f04633f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 00:06:29.397272 17911 start.go:360] acquireMachinesLock for addons-262069: {Name:mke100036b6b648b2e8844ce094d9d979b4c8eb4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1217 00:06:29.397337 17911 start.go:364] duration metric: took 47.711µs to acquireMachinesLock for "addons-262069"
I1217 00:06:29.397360 17911 start.go:93] Provisioning new machine with config: &{Name:addons-262069 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-262069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1217 00:06:29.397425 17911 start.go:125] createHost starting for "" (driver="kvm2")
I1217 00:06:29.399106 17911 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1217 00:06:29.399260 17911 start.go:159] libmachine.API.Create for "addons-262069" (driver="kvm2")
I1217 00:06:29.399287 17911 client.go:173] LocalClient.Create starting
I1217 00:06:29.399403 17911 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem
I1217 00:06:29.423361 17911 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem
I1217 00:06:29.547071 17911 main.go:143] libmachine: creating domain...
I1217 00:06:29.547091 17911 main.go:143] libmachine: creating network...
I1217 00:06:29.548549 17911 main.go:143] libmachine: found existing default network
I1217 00:06:29.548795 17911 main.go:143] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1217 00:06:29.549398 17911 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b737b0}
I1217 00:06:29.549515 17911 main.go:143] libmachine: defining private network:
<network>
<name>mk-addons-262069</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1217 00:06:29.555796 17911 main.go:143] libmachine: creating private network mk-addons-262069 192.168.39.0/24...
I1217 00:06:29.625854 17911 main.go:143] libmachine: private network mk-addons-262069 192.168.39.0/24 created
I1217 00:06:29.626160 17911 main.go:143] libmachine: <network>
<name>mk-addons-262069</name>
<uuid>e703ee39-5ac4-4765-b8b5-6f6ef651ada0</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:2c:cd:ea'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1217 00:06:29.626197 17911 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069 ...
I1217 00:06:29.626231 17911 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22168-12839/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso
I1217 00:06:29.626257 17911 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22168-12839/.minikube
I1217 00:06:29.626324 17911 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22168-12839/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22168-12839/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso...
I1217 00:06:29.887825 17911 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa...
I1217 00:06:30.001145 17911 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/addons-262069.rawdisk...
I1217 00:06:30.001193 17911 main.go:143] libmachine: Writing magic tar header
I1217 00:06:30.001217 17911 main.go:143] libmachine: Writing SSH key tar header
I1217 00:06:30.001335 17911 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069 ...
I1217 00:06:30.001427 17911 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069
I1217 00:06:30.001455 17911 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069 (perms=drwx------)
I1217 00:06:30.001475 17911 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22168-12839/.minikube/machines
I1217 00:06:30.001501 17911 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22168-12839/.minikube/machines (perms=drwxr-xr-x)
I1217 00:06:30.001527 17911 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22168-12839/.minikube
I1217 00:06:30.001541 17911 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22168-12839/.minikube (perms=drwxr-xr-x)
I1217 00:06:30.001558 17911 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22168-12839
I1217 00:06:30.001576 17911 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22168-12839 (perms=drwxrwxr-x)
I1217 00:06:30.001594 17911 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1217 00:06:30.001609 17911 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1217 00:06:30.001625 17911 main.go:143] libmachine: checking permissions on dir: /home/jenkins
I1217 00:06:30.001644 17911 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1217 00:06:30.001661 17911 main.go:143] libmachine: checking permissions on dir: /home
I1217 00:06:30.001674 17911 main.go:143] libmachine: skipping /home - not owner
I1217 00:06:30.001680 17911 main.go:143] libmachine: defining domain...
I1217 00:06:30.002877 17911 main.go:143] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-262069</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/addons-262069.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-262069'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
I1217 00:06:30.011557 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:d9:3f:b2 in network default
I1217 00:06:30.012356 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:30.012375 17911 main.go:143] libmachine: starting domain...
I1217 00:06:30.012380 17911 main.go:143] libmachine: ensuring networks are active...
I1217 00:06:30.013245 17911 main.go:143] libmachine: Ensuring network default is active
I1217 00:06:30.013715 17911 main.go:143] libmachine: Ensuring network mk-addons-262069 is active
I1217 00:06:30.014461 17911 main.go:143] libmachine: getting domain XML...
I1217 00:06:30.015650 17911 main.go:143] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-262069</name>
<uuid>c11e3475-a333-4013-be6a-553f88d11a60</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/addons-262069.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:78:11:d8'/>
<source network='mk-addons-262069'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:d9:3f:b2'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1217 00:06:31.345463 17911 main.go:143] libmachine: waiting for domain to start...
I1217 00:06:31.346813 17911 main.go:143] libmachine: domain is now running
I1217 00:06:31.346829 17911 main.go:143] libmachine: waiting for IP...
I1217 00:06:31.347578 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:31.348170 17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
I1217 00:06:31.348186 17911 main.go:143] libmachine: trying to list again with source=arp
I1217 00:06:31.348482 17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
I1217 00:06:31.348519 17911 retry.go:31] will retry after 237.694409ms: waiting for domain to come up
I1217 00:06:31.588159 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:31.588772 17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
I1217 00:06:31.588793 17911 main.go:143] libmachine: trying to list again with source=arp
I1217 00:06:31.589225 17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
I1217 00:06:31.589269 17911 retry.go:31] will retry after 332.822233ms: waiting for domain to come up
I1217 00:06:31.924041 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:31.924709 17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
I1217 00:06:31.924728 17911 main.go:143] libmachine: trying to list again with source=arp
I1217 00:06:31.925115 17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
I1217 00:06:31.925203 17911 retry.go:31] will retry after 351.790303ms: waiting for domain to come up
I1217 00:06:32.279053 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:32.279624 17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
I1217 00:06:32.279651 17911 main.go:143] libmachine: trying to list again with source=arp
I1217 00:06:32.280061 17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
I1217 00:06:32.280099 17911 retry.go:31] will retry after 427.603217ms: waiting for domain to come up
I1217 00:06:32.709895 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:32.710435 17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
I1217 00:06:32.710451 17911 main.go:143] libmachine: trying to list again with source=arp
I1217 00:06:32.710775 17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
I1217 00:06:32.710809 17911 retry.go:31] will retry after 686.480041ms: waiting for domain to come up
I1217 00:06:33.398668 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:33.399225 17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
I1217 00:06:33.399244 17911 main.go:143] libmachine: trying to list again with source=arp
I1217 00:06:33.399552 17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
I1217 00:06:33.399588 17911 retry.go:31] will retry after 794.514614ms: waiting for domain to come up
I1217 00:06:34.195475 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:34.196071 17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
I1217 00:06:34.196087 17911 main.go:143] libmachine: trying to list again with source=arp
I1217 00:06:34.196358 17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
I1217 00:06:34.196391 17911 retry.go:31] will retry after 1.179105994s: waiting for domain to come up
I1217 00:06:35.377134 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:35.377747 17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
I1217 00:06:35.377766 17911 main.go:143] libmachine: trying to list again with source=arp
I1217 00:06:35.378115 17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
I1217 00:06:35.378165 17911 retry.go:31] will retry after 1.065984921s: waiting for domain to come up
I1217 00:06:36.445627 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:36.446286 17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
I1217 00:06:36.446306 17911 main.go:143] libmachine: trying to list again with source=arp
I1217 00:06:36.446612 17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
I1217 00:06:36.446650 17911 retry.go:31] will retry after 1.365834942s: waiting for domain to come up
I1217 00:06:37.814074 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:37.814577 17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
I1217 00:06:37.814591 17911 main.go:143] libmachine: trying to list again with source=arp
I1217 00:06:37.814876 17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
I1217 00:06:37.814907 17911 retry.go:31] will retry after 1.648841511s: waiting for domain to come up
I1217 00:06:39.465655 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:39.466372 17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
I1217 00:06:39.466394 17911 main.go:143] libmachine: trying to list again with source=arp
I1217 00:06:39.466758 17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
I1217 00:06:39.466801 17911 retry.go:31] will retry after 2.17642133s: waiting for domain to come up
I1217 00:06:41.646499 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:41.647063 17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
I1217 00:06:41.647078 17911 main.go:143] libmachine: trying to list again with source=arp
I1217 00:06:41.647353 17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
I1217 00:06:41.647399 17911 retry.go:31] will retry after 3.466079888s: waiting for domain to come up
I1217 00:06:45.114939 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:45.115377 17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
I1217 00:06:45.115392 17911 main.go:143] libmachine: trying to list again with source=arp
I1217 00:06:45.115637 17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
I1217 00:06:45.115666 17911 retry.go:31] will retry after 4.185434258s: waiting for domain to come up
I1217 00:06:49.306253 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:49.306945 17911 main.go:143] libmachine: domain addons-262069 has current primary IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:49.306968 17911 main.go:143] libmachine: found domain IP: 192.168.39.183
I1217 00:06:49.306978 17911 main.go:143] libmachine: reserving static IP address...
I1217 00:06:49.307503 17911 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-262069", mac: "52:54:00:78:11:d8", ip: "192.168.39.183"} in network mk-addons-262069
I1217 00:06:49.579728 17911 main.go:143] libmachine: reserved static IP address 192.168.39.183 for domain addons-262069
I1217 00:06:49.579756 17911 main.go:143] libmachine: waiting for SSH...
I1217 00:06:49.579764 17911 main.go:143] libmachine: Getting to WaitForSSH function...
I1217 00:06:49.583518 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:49.584088 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:minikube Clientid:01:52:54:00:78:11:d8}
I1217 00:06:49.584136 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:49.584399 17911 main.go:143] libmachine: Using SSH client type: native
I1217 00:06:49.584694 17911 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.183 22 <nil> <nil>}
I1217 00:06:49.584707 17911 main.go:143] libmachine: About to run SSH command:
exit 0
I1217 00:06:49.694335 17911 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1217 00:06:49.694804 17911 main.go:143] libmachine: domain creation complete
I1217 00:06:49.696808 17911 machine.go:94] provisionDockerMachine start ...
I1217 00:06:49.699690 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:49.700207 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:minikube Clientid:01:52:54:00:78:11:d8}
I1217 00:06:49.700257 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:49.700484 17911 main.go:143] libmachine: Using SSH client type: native
I1217 00:06:49.700717 17911 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.183 22 <nil> <nil>}
I1217 00:06:49.700731 17911 main.go:143] libmachine: About to run SSH command:
hostname
I1217 00:06:49.813425 17911 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
I1217 00:06:49.813465 17911 buildroot.go:166] provisioning hostname "addons-262069"
I1217 00:06:49.816821 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:49.817335 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:06:49.817363 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:49.817561 17911 main.go:143] libmachine: Using SSH client type: native
I1217 00:06:49.817743 17911 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.183 22 <nil> <nil>}
I1217 00:06:49.817755 17911 main.go:143] libmachine: About to run SSH command:
sudo hostname addons-262069 && echo "addons-262069" | sudo tee /etc/hostname
I1217 00:06:49.943763 17911 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-262069
I1217 00:06:49.946937 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:49.947468 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:06:49.947503 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:49.947715 17911 main.go:143] libmachine: Using SSH client type: native
I1217 00:06:49.948009 17911 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.183 22 <nil> <nil>}
I1217 00:06:49.948047 17911 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-262069' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-262069/g' /etc/hosts;
else
echo '127.0.1.1 addons-262069' | sudo tee -a /etc/hosts;
fi
fi
I1217 00:06:50.066107 17911 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1217 00:06:50.066143 17911 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12839/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12839/.minikube}
I1217 00:06:50.066191 17911 buildroot.go:174] setting up certificates
I1217 00:06:50.066209 17911 provision.go:84] configureAuth start
I1217 00:06:50.069525 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:50.070099 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:06:50.070138 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:50.073351 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:50.073864 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:06:50.073902 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:50.074158 17911 provision.go:143] copyHostCerts
I1217 00:06:50.074249 17911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12839/.minikube/ca.pem (1078 bytes)
I1217 00:06:50.074434 17911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12839/.minikube/cert.pem (1123 bytes)
I1217 00:06:50.074576 17911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12839/.minikube/key.pem (1679 bytes)
I1217 00:06:50.074679 17911 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12839/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem org=jenkins.addons-262069 san=[127.0.0.1 192.168.39.183 addons-262069 localhost minikube]
I1217 00:06:50.162585 17911 provision.go:177] copyRemoteCerts
I1217 00:06:50.162655 17911 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1217 00:06:50.165053 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:50.165463 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:06:50.165485 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:50.165610 17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
I1217 00:06:50.253682 17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1217 00:06:50.288484 17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1217 00:06:50.322785 17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1217 00:06:50.357887 17911 provision.go:87] duration metric: took 291.645642ms to configureAuth
I1217 00:06:50.357911 17911 buildroot.go:189] setting minikube options for container-runtime
I1217 00:06:50.358145 17911 config.go:182] Loaded profile config "addons-262069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:06:50.361101 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:50.361524 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:06:50.361558 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:50.361818 17911 main.go:143] libmachine: Using SSH client type: native
I1217 00:06:50.362047 17911 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.183 22 <nil> <nil>}
I1217 00:06:50.362070 17911 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1217 00:06:50.753241 17911 main.go:143] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1217 00:06:50.753268 17911 machine.go:97] duration metric: took 1.056439173s to provisionDockerMachine
I1217 00:06:50.753277 17911 client.go:176] duration metric: took 21.353980905s to LocalClient.Create
I1217 00:06:50.753296 17911 start.go:167] duration metric: took 21.354040963s to libmachine.API.Create "addons-262069"
I1217 00:06:50.753305 17911 start.go:293] postStartSetup for "addons-262069" (driver="kvm2")
I1217 00:06:50.753317 17911 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1217 00:06:50.753375 17911 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1217 00:06:50.756514 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:50.756986 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:06:50.757046 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:50.757300 17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
I1217 00:06:50.843163 17911 ssh_runner.go:195] Run: cat /etc/os-release
I1217 00:06:50.848946 17911 info.go:137] Remote host: Buildroot 2025.02
I1217 00:06:50.848974 17911 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12839/.minikube/addons for local assets ...
I1217 00:06:50.849048 17911 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12839/.minikube/files for local assets ...
I1217 00:06:50.849086 17911 start.go:296] duration metric: took 95.774347ms for postStartSetup
I1217 00:06:50.880171 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:50.880746 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:06:50.880780 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:50.881106 17911 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/config.json ...
I1217 00:06:50.881386 17911 start.go:128] duration metric: took 21.48394966s to createHost
I1217 00:06:50.884160 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:50.884614 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:06:50.884673 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:50.884845 17911 main.go:143] libmachine: Using SSH client type: native
I1217 00:06:50.885173 17911 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.183 22 <nil> <nil>}
I1217 00:06:50.885193 17911 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1217 00:06:50.992119 17911 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765930010.958390996
I1217 00:06:50.992204 17911 fix.go:216] guest clock: 1765930010.958390996
I1217 00:06:50.992214 17911 fix.go:229] Guest: 2025-12-17 00:06:50.958390996 +0000 UTC Remote: 2025-12-17 00:06:50.881409729 +0000 UTC m=+21.584032290 (delta=76.981267ms)
I1217 00:06:50.992238 17911 fix.go:200] guest clock delta is within tolerance: 76.981267ms
I1217 00:06:50.992245 17911 start.go:83] releasing machines lock for "addons-262069", held for 21.594895966s
I1217 00:06:50.995881 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:50.996398 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:06:50.996426 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:50.997036 17911 ssh_runner.go:195] Run: cat /version.json
I1217 00:06:50.997111 17911 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1217 00:06:51.000174 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:51.000341 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:51.000627 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:06:51.000653 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:51.000719 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:06:51.000748 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:51.000810 17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
I1217 00:06:51.001051 17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
I1217 00:06:51.077380 17911 ssh_runner.go:195] Run: systemctl --version
I1217 00:06:51.106720 17911 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1217 00:06:51.688135 17911 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1217 00:06:51.696822 17911 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1217 00:06:51.696893 17911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1217 00:06:51.719872 17911 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1217 00:06:51.719899 17911 start.go:496] detecting cgroup driver to use...
I1217 00:06:51.719963 17911 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1217 00:06:51.746757 17911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1217 00:06:51.766895 17911 docker.go:218] disabling cri-docker service (if available) ...
I1217 00:06:51.766964 17911 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1217 00:06:51.786707 17911 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1217 00:06:51.808162 17911 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1217 00:06:51.964974 17911 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1217 00:06:52.182834 17911 docker.go:234] disabling docker service ...
I1217 00:06:52.182901 17911 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1217 00:06:52.200724 17911 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1217 00:06:52.217612 17911 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1217 00:06:52.389096 17911 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1217 00:06:52.539146 17911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1217 00:06:52.556703 17911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1217 00:06:52.582599 17911 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1217 00:06:52.582692 17911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1217 00:06:52.596725 17911 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1217 00:06:52.596797 17911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1217 00:06:52.611153 17911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1217 00:06:52.625661 17911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1217 00:06:52.640879 17911 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1217 00:06:52.656041 17911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1217 00:06:52.669426 17911 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1217 00:06:52.692636 17911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I1217 00:06:52.708891 17911 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1217 00:06:52.721811 17911 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1217 00:06:52.721875 17911 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1217 00:06:52.747842 17911 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1217 00:06:52.761648 17911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1217 00:06:52.911574 17911 ssh_runner.go:195] Run: sudo systemctl restart crio
I1217 00:06:53.142300 17911 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1217 00:06:53.142419 17911 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1217 00:06:53.148213 17911 start.go:564] Will wait 60s for crictl version
I1217 00:06:53.148293 17911 ssh_runner.go:195] Run: which crictl
I1217 00:06:53.152721 17911 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1217 00:06:53.189608 17911 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1217 00:06:53.189754 17911 ssh_runner.go:195] Run: crio --version
I1217 00:06:53.219996 17911 ssh_runner.go:195] Run: crio --version
I1217 00:06:53.305579 17911 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
I1217 00:06:53.317279 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:53.317802 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:06:53.317834 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:06:53.318076 17911 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1217 00:06:53.323499 17911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1217 00:06:53.340386 17911 kubeadm.go:884] updating cluster {Name:addons-262069 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-262069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1217 00:06:53.340527 17911 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1217 00:06:53.340578 17911 ssh_runner.go:195] Run: sudo crictl images --output json
I1217 00:06:53.373645 17911 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
I1217 00:06:53.373735 17911 ssh_runner.go:195] Run: which lz4
I1217 00:06:53.378763 17911 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1217 00:06:53.384417 17911 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1217 00:06:53.384458 17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
I1217 00:06:54.714134 17911 crio.go:462] duration metric: took 1.335442713s to copy over tarball
I1217 00:06:54.714264 17911 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1217 00:06:56.278914 17911 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.564599763s)
I1217 00:06:56.278956 17911 crio.go:469] duration metric: took 1.564785516s to extract the tarball
I1217 00:06:56.278963 17911 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1217 00:06:56.317367 17911 ssh_runner.go:195] Run: sudo crictl images --output json
I1217 00:06:56.359563 17911 crio.go:514] all images are preloaded for cri-o runtime.
I1217 00:06:56.359590 17911 cache_images.go:86] Images are preloaded, skipping loading
I1217 00:06:56.360108 17911 kubeadm.go:935] updating node { 192.168.39.183 8443 v1.34.2 crio true true} ...
I1217 00:06:56.360214 17911 kubeadm.go:947] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-262069 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.183
[Install]
config:
{KubernetesVersion:v1.34.2 ClusterName:addons-262069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1217 00:06:56.360303 17911 ssh_runner.go:195] Run: crio config
I1217 00:06:56.414892 17911 cni.go:84] Creating CNI manager for ""
I1217 00:06:56.414923 17911 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1217 00:06:56.414944 17911 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1217 00:06:56.414972 17911 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.183 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-262069 NodeName:addons-262069 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1217 00:06:56.415142 17911 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.183
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/crio/crio.sock
name: "addons-262069"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.39.183"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1217 00:06:56.415217 17911 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
I1217 00:06:56.428454 17911 binaries.go:51] Found k8s binaries, skipping transfer
I1217 00:06:56.428541 17911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1217 00:06:56.441469 17911 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I1217 00:06:56.464193 17911 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1217 00:06:56.487104 17911 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
I1217 00:06:56.509096 17911 ssh_runner.go:195] Run: grep 192.168.39.183 control-plane.minikube.internal$ /etc/hosts
I1217 00:06:56.513592 17911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.183 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1217 00:06:56.529346 17911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1217 00:06:56.670337 17911 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1217 00:06:56.706988 17911 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069 for IP: 192.168.39.183
I1217 00:06:56.707014 17911 certs.go:195] generating shared ca certs ...
I1217 00:06:56.707042 17911 certs.go:227] acquiring lock for ca certs: {Name:mk381e1d576792ac916a6048c2225a8ab856de70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 00:06:56.707233 17911 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12839/.minikube/ca.key
I1217 00:06:56.760158 17911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt ...
I1217 00:06:56.760187 17911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt: {Name:mkb2c08e9d46609296dd89647d95742b5db1a4b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 00:06:56.760369 17911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12839/.minikube/ca.key ...
I1217 00:06:56.760382 17911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/ca.key: {Name:mk7cec444890283789c96bcbb8344d3796e24b60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 00:06:56.760461 17911 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.key
I1217 00:06:56.826173 17911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.crt ...
I1217 00:06:56.826204 17911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.crt: {Name:mk8eaeff7b342ac9d7fbe6b921ae9ee04f8152f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 00:06:56.826365 17911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.key ...
I1217 00:06:56.826377 17911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.key: {Name:mk047258c3120e08a69c19fd6689532a7cadbd45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 00:06:56.826455 17911 certs.go:257] generating profile certs ...
I1217 00:06:56.826510 17911 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.key
I1217 00:06:56.826530 17911 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt with IP's: []
I1217 00:06:56.951623 17911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt ...
I1217 00:06:56.951651 17911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: {Name:mkb13e009b1a1654f88324d661c047a2b60d50be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 00:06:56.951802 17911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.key ...
I1217 00:06:56.951814 17911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.key: {Name:mkb3b6b6b215aa31da9d982cab9553641a45d235 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 00:06:56.951879 17911 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/apiserver.key.c5ec7266
I1217 00:06:56.951897 17911 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/apiserver.crt.c5ec7266 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183]
I1217 00:06:57.170301 17911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/apiserver.crt.c5ec7266 ...
I1217 00:06:57.170329 17911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/apiserver.crt.c5ec7266: {Name:mkd98bc355df73c446b891110632f2910c5ace14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 00:06:57.170500 17911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/apiserver.key.c5ec7266 ...
I1217 00:06:57.170514 17911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/apiserver.key.c5ec7266: {Name:mk2a9e507d293c96915e5ee5adf189f03b6b2c0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 00:06:57.170584 17911 certs.go:382] copying /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/apiserver.crt.c5ec7266 -> /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/apiserver.crt
I1217 00:06:57.170649 17911 certs.go:386] copying /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/apiserver.key.c5ec7266 -> /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/apiserver.key
I1217 00:06:57.170695 17911 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/proxy-client.key
I1217 00:06:57.170711 17911 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/proxy-client.crt with IP's: []
I1217 00:06:57.274314 17911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/proxy-client.crt ...
I1217 00:06:57.274343 17911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/proxy-client.crt: {Name:mkce03c36886d4cd2da2547442c30d7ce503940b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 00:06:57.274503 17911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/proxy-client.key ...
I1217 00:06:57.274514 17911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/proxy-client.key: {Name:mk67157794ec591410a25272dec9e7070cac31fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 00:06:57.274673 17911 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem (1679 bytes)
I1217 00:06:57.274707 17911 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem (1078 bytes)
I1217 00:06:57.274731 17911 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem (1123 bytes)
I1217 00:06:57.274760 17911 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem (1679 bytes)
I1217 00:06:57.275303 17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1217 00:06:57.309054 17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1217 00:06:57.342756 17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1217 00:06:57.379046 17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1217 00:06:57.419351 17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1217 00:06:57.458505 17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1217 00:06:57.491196 17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1217 00:06:57.523486 17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1217 00:06:57.559687 17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1217 00:06:57.593004 17911 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1217 00:06:57.615695 17911 ssh_runner.go:195] Run: openssl version
I1217 00:06:57.622651 17911 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1217 00:06:57.635985 17911 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1217 00:06:57.648884 17911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1217 00:06:57.654490 17911 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:06 /usr/share/ca-certificates/minikubeCA.pem
I1217 00:06:57.654574 17911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1217 00:06:57.662568 17911 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1217 00:06:57.675685 17911 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1217 00:06:57.688551 17911 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1217 00:06:57.693797 17911 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1217 00:06:57.693852 17911 kubeadm.go:401] StartCluster: {Name:addons-262069 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-262069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1217 00:06:57.693930 17911 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1217 00:06:57.693983 17911 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1217 00:06:57.731296 17911 cri.go:89] found id: ""
I1217 00:06:57.731379 17911 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1217 00:06:57.745397 17911 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1217 00:06:57.758798 17911 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1217 00:06:57.772141 17911 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1217 00:06:57.772161 17911 kubeadm.go:158] found existing configuration files:
I1217 00:06:57.772215 17911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1217 00:06:57.784210 17911 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1217 00:06:57.784276 17911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1217 00:06:57.797070 17911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1217 00:06:57.809671 17911 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1217 00:06:57.809734 17911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1217 00:06:57.822571 17911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1217 00:06:57.834619 17911 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1217 00:06:57.834683 17911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1217 00:06:57.847701 17911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1217 00:06:57.860875 17911 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1217 00:06:57.860939 17911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1217 00:06:57.873821 17911 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1217 00:06:58.033541 17911 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1217 00:07:10.593721 17911 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
I1217 00:07:10.593794 17911 kubeadm.go:319] [preflight] Running pre-flight checks
I1217 00:07:10.593896 17911 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1217 00:07:10.594076 17911 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1217 00:07:10.594204 17911 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1217 00:07:10.594287 17911 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1217 00:07:10.596155 17911 out.go:252] - Generating certificates and keys ...
I1217 00:07:10.596249 17911 kubeadm.go:319] [certs] Using existing ca certificate authority
I1217 00:07:10.596341 17911 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1217 00:07:10.596425 17911 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1217 00:07:10.596530 17911 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1217 00:07:10.596619 17911 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1217 00:07:10.596704 17911 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1217 00:07:10.596792 17911 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1217 00:07:10.596944 17911 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-262069 localhost] and IPs [192.168.39.183 127.0.0.1 ::1]
I1217 00:07:10.597040 17911 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1217 00:07:10.597189 17911 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-262069 localhost] and IPs [192.168.39.183 127.0.0.1 ::1]
I1217 00:07:10.597270 17911 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1217 00:07:10.597366 17911 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1217 00:07:10.597427 17911 kubeadm.go:319] [certs] Generating "sa" key and public key
I1217 00:07:10.597510 17911 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1217 00:07:10.597593 17911 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1217 00:07:10.597673 17911 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1217 00:07:10.597760 17911 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1217 00:07:10.597874 17911 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1217 00:07:10.597938 17911 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1217 00:07:10.598010 17911 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1217 00:07:10.598108 17911 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1217 00:07:10.599830 17911 out.go:252] - Booting up control plane ...
I1217 00:07:10.599932 17911 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1217 00:07:10.600046 17911 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1217 00:07:10.600160 17911 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1217 00:07:10.600309 17911 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1217 00:07:10.600445 17911 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1217 00:07:10.600577 17911 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1217 00:07:10.600682 17911 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1217 00:07:10.600733 17911 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1217 00:07:10.600903 17911 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1217 00:07:10.601057 17911 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1217 00:07:10.601142 17911 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.50295449s
I1217 00:07:10.601257 17911 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1217 00:07:10.601361 17911 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.183:8443/livez
I1217 00:07:10.601483 17911 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1217 00:07:10.601585 17911 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1217 00:07:10.601684 17911 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.009421732s
I1217 00:07:10.601777 17911 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.759014675s
I1217 00:07:10.601868 17911 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.004757659s
I1217 00:07:10.601990 17911 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1217 00:07:10.602163 17911 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1217 00:07:10.602257 17911 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1217 00:07:10.602432 17911 kubeadm.go:319] [mark-control-plane] Marking the node addons-262069 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1217 00:07:10.602521 17911 kubeadm.go:319] [bootstrap-token] Using token: uq1jlh.cbunlm48ja5dh288
I1217 00:07:10.604152 17911 out.go:252] - Configuring RBAC rules ...
I1217 00:07:10.604262 17911 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1217 00:07:10.604403 17911 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1217 00:07:10.604554 17911 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1217 00:07:10.604742 17911 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1217 00:07:10.604913 17911 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1217 00:07:10.605047 17911 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1217 00:07:10.605188 17911 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1217 00:07:10.605258 17911 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1217 00:07:10.605341 17911 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1217 00:07:10.605349 17911 kubeadm.go:319]
I1217 00:07:10.605436 17911 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1217 00:07:10.605444 17911 kubeadm.go:319]
I1217 00:07:10.605544 17911 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1217 00:07:10.605553 17911 kubeadm.go:319]
I1217 00:07:10.605593 17911 kubeadm.go:319] mkdir -p $HOME/.kube
I1217 00:07:10.605677 17911 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1217 00:07:10.605759 17911 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1217 00:07:10.605777 17911 kubeadm.go:319]
I1217 00:07:10.605858 17911 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1217 00:07:10.605872 17911 kubeadm.go:319]
I1217 00:07:10.605938 17911 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1217 00:07:10.605952 17911 kubeadm.go:319]
I1217 00:07:10.606042 17911 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1217 00:07:10.606169 17911 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1217 00:07:10.606271 17911 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1217 00:07:10.606280 17911 kubeadm.go:319]
I1217 00:07:10.606388 17911 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1217 00:07:10.606477 17911 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1217 00:07:10.606486 17911 kubeadm.go:319]
I1217 00:07:10.606597 17911 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token uq1jlh.cbunlm48ja5dh288 \
I1217 00:07:10.606747 17911 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:28cbf36bca9e367b0c14399fa9a279bc1d5d093a4138092f10e2eab3c16dce77 \
I1217 00:07:10.606802 17911 kubeadm.go:319] --control-plane
I1217 00:07:10.606820 17911 kubeadm.go:319]
I1217 00:07:10.606944 17911 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1217 00:07:10.606960 17911 kubeadm.go:319]
I1217 00:07:10.607101 17911 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token uq1jlh.cbunlm48ja5dh288 \
I1217 00:07:10.607277 17911 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:28cbf36bca9e367b0c14399fa9a279bc1d5d093a4138092f10e2eab3c16dce77
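The join commands above carry a `--discovery-token-ca-cert-hash`; under kubeadm's documented scheme that value is the SHA-256 of the cluster CA's DER-encoded public key. A sketch that recomputes such a hash — using a throwaway self-signed cert generated here as a stand-in for the real `/etc/kubernetes/pki/ca.crt`:

```shell
# Generate a throwaway CA cert (stand-in for /etc/kubernetes/pki/ca.crt).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key -out /tmp/ca.crt \
  -subj "/CN=kubernetes" -days 1 2>/dev/null

# Recompute the --discovery-token-ca-cert-hash value:
# sha256 over the DER-encoded public key of the CA certificate.
hash=$(openssl x509 -pubkey -noout -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:${hash}"
```

Run against the real CA cert on the control plane, this reproduces the `sha256:28cbf3…` value shown in the join command.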
I1217 00:07:10.607302 17911 cni.go:84] Creating CNI manager for ""
I1217 00:07:10.607312 17911 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1217 00:07:10.609109 17911 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1217 00:07:10.610448 17911 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1217 00:07:10.628221 17911 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
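The 496-byte `1-k8s.conflist` written above is minikube's bridge CNI configuration. A minimal conflist of the same shape — field values here are illustrative, following the CNI spec, not minikube's exact file:

```json
{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```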
I1217 00:07:10.653817 17911 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1217 00:07:10.653964 17911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-262069 minikube.k8s.io/updated_at=2025_12_17T00_07_10_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1 minikube.k8s.io/name=addons-262069 minikube.k8s.io/primary=true
I1217 00:07:10.653971 17911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 00:07:10.733102 17911 ops.go:34] apiserver oom_adj: -16
I1217 00:07:10.820642 17911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 00:07:11.321669 17911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 00:07:11.821721 17911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 00:07:12.321404 17911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 00:07:12.821428 17911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 00:07:13.320704 17911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 00:07:13.821538 17911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 00:07:14.321397 17911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 00:07:14.821286 17911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 00:07:15.321375 17911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 00:07:15.413780 17911 kubeadm.go:1114] duration metric: took 4.759900031s to wait for elevateKubeSystemPrivileges
I1217 00:07:15.413821 17911 kubeadm.go:403] duration metric: took 17.719971777s to StartCluster
I1217 00:07:15.413841 17911 settings.go:142] acquiring lock: {Name:mk0fa06a6a557f0851b041158306daec92094c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 00:07:15.413977 17911 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22168-12839/kubeconfig
I1217 00:07:15.414444 17911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/kubeconfig: {Name:mk0867cff530c231805e36a9674d4fe6612173a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 00:07:15.414678 17911 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1217 00:07:15.414690 17911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1217 00:07:15.414719 17911 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1217 00:07:15.414841 17911 addons.go:70] Setting yakd=true in profile "addons-262069"
I1217 00:07:15.414862 17911 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-262069"
I1217 00:07:15.414882 17911 addons.go:239] Setting addon yakd=true in "addons-262069"
I1217 00:07:15.414888 17911 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-262069"
I1217 00:07:15.414912 17911 host.go:66] Checking if "addons-262069" exists ...
I1217 00:07:15.414926 17911 addons.go:70] Setting cloud-spanner=true in profile "addons-262069"
I1217 00:07:15.414932 17911 config.go:182] Loaded profile config "addons-262069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:07:15.414945 17911 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-262069"
I1217 00:07:15.414974 17911 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-262069"
I1217 00:07:15.414938 17911 addons.go:239] Setting addon cloud-spanner=true in "addons-262069"
I1217 00:07:15.414991 17911 host.go:66] Checking if "addons-262069" exists ...
I1217 00:07:15.415001 17911 host.go:66] Checking if "addons-262069" exists ...
I1217 00:07:15.415106 17911 addons.go:70] Setting storage-provisioner=true in profile "addons-262069"
I1217 00:07:15.415126 17911 addons.go:239] Setting addon storage-provisioner=true in "addons-262069"
I1217 00:07:15.415134 17911 addons.go:70] Setting gcp-auth=true in profile "addons-262069"
I1217 00:07:15.415154 17911 host.go:66] Checking if "addons-262069" exists ...
I1217 00:07:15.415206 17911 mustload.go:66] Loading cluster: addons-262069
I1217 00:07:15.415207 17911 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-262069"
I1217 00:07:15.415219 17911 addons.go:70] Setting default-storageclass=true in profile "addons-262069"
I1217 00:07:15.415229 17911 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-262069"
I1217 00:07:15.415234 17911 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-262069"
I1217 00:07:15.415254 17911 host.go:66] Checking if "addons-262069" exists ...
I1217 00:07:15.415382 17911 config.go:182] Loaded profile config "addons-262069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:07:15.415533 17911 addons.go:70] Setting registry=true in profile "addons-262069"
I1217 00:07:15.415548 17911 addons.go:239] Setting addon registry=true in "addons-262069"
I1217 00:07:15.415570 17911 host.go:66] Checking if "addons-262069" exists ...
I1217 00:07:15.415939 17911 addons.go:70] Setting volcano=true in profile "addons-262069"
I1217 00:07:15.415961 17911 addons.go:239] Setting addon volcano=true in "addons-262069"
I1217 00:07:15.415986 17911 host.go:66] Checking if "addons-262069" exists ...
I1217 00:07:15.414917 17911 host.go:66] Checking if "addons-262069" exists ...
I1217 00:07:15.416100 17911 addons.go:70] Setting ingress=true in profile "addons-262069"
I1217 00:07:15.416117 17911 addons.go:239] Setting addon ingress=true in "addons-262069"
I1217 00:07:15.416148 17911 host.go:66] Checking if "addons-262069" exists ...
I1217 00:07:15.416510 17911 addons.go:70] Setting volumesnapshots=true in profile "addons-262069"
I1217 00:07:15.416516 17911 addons.go:70] Setting metrics-server=true in profile "addons-262069"
I1217 00:07:15.416548 17911 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-262069"
I1217 00:07:15.414850 17911 addons.go:70] Setting inspektor-gadget=true in profile "addons-262069"
I1217 00:07:15.416563 17911 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-262069"
I1217 00:07:15.416576 17911 addons.go:70] Setting ingress-dns=true in profile "addons-262069"
I1217 00:07:15.416594 17911 addons.go:239] Setting addon ingress-dns=true in "addons-262069"
I1217 00:07:15.416611 17911 addons.go:70] Setting registry-creds=true in profile "addons-262069"
I1217 00:07:15.416624 17911 host.go:66] Checking if "addons-262069" exists ...
I1217 00:07:15.416629 17911 addons.go:239] Setting addon registry-creds=true in "addons-262069"
I1217 00:07:15.416652 17911 host.go:66] Checking if "addons-262069" exists ...
I1217 00:07:15.416549 17911 addons.go:239] Setting addon metrics-server=true in "addons-262069"
I1217 00:07:15.416737 17911 host.go:66] Checking if "addons-262069" exists ...
I1217 00:07:15.416566 17911 addons.go:239] Setting addon inspektor-gadget=true in "addons-262069"
I1217 00:07:15.416919 17911 host.go:66] Checking if "addons-262069" exists ...
I1217 00:07:15.416532 17911 addons.go:239] Setting addon volumesnapshots=true in "addons-262069"
I1217 00:07:15.417201 17911 host.go:66] Checking if "addons-262069" exists ...
I1217 00:07:15.417620 17911 out.go:179] * Verifying Kubernetes components...
I1217 00:07:15.419377 17911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1217 00:07:15.423461 17911 host.go:66] Checking if "addons-262069" exists ...
I1217 00:07:15.423544 17911 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1217 00:07:15.423548 17911 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.5
I1217 00:07:15.423627 17911 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
I1217 00:07:15.423763 17911 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1217 00:07:15.425060 17911 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1217 00:07:15.425076 17911 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1217 00:07:15.425085 17911 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1217 00:07:15.425097 17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1217 00:07:15.425160 17911 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
I1217 00:07:15.425949 17911 addons.go:239] Setting addon default-storageclass=true in "addons-262069"
I1217 00:07:15.425983 17911 host.go:66] Checking if "addons-262069" exists ...
I1217 00:07:15.426142 17911 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1217 00:07:15.426152 17911 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1217 00:07:15.426160 17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
W1217 00:07:15.425427 17911 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1217 00:07:15.427240 17911 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1217 00:07:15.427275 17911 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1217 00:07:15.427287 17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1217 00:07:15.427301 17911 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1217 00:07:15.427541 17911 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-262069"
I1217 00:07:15.428176 17911 host.go:66] Checking if "addons-262069" exists ...
I1217 00:07:15.428458 17911 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1217 00:07:15.428568 17911 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
I1217 00:07:15.429351 17911 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
I1217 00:07:15.429367 17911 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1217 00:07:15.429406 17911 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1217 00:07:15.429425 17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1217 00:07:15.429521 17911 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1217 00:07:15.430213 17911 out.go:179] - Using image docker.io/registry:3.0.0
I1217 00:07:15.430249 17911 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1217 00:07:15.430649 17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1217 00:07:15.431083 17911 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1217 00:07:15.431089 17911 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1217 00:07:15.431638 17911 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1217 00:07:15.431655 17911 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1217 00:07:15.431129 17911 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1217 00:07:15.431783 17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1217 00:07:15.431140 17911 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1217 00:07:15.431953 17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1217 00:07:15.431147 17911 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1217 00:07:15.432055 17911 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1217 00:07:15.432157 17911 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1217 00:07:15.432167 17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1217 00:07:15.435919 17911 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1217 00:07:15.436004 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.436002 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.436033 17911 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1217 00:07:15.436600 17911 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1217 00:07:15.436962 17911 out.go:179] - Using image docker.io/busybox:stable
I1217 00:07:15.437729 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.438183 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:07:15.438623 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.438333 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:07:15.438747 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.439034 17911 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1217 00:07:15.439935 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.440070 17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
I1217 00:07:15.440194 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:07:15.440229 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.440226 17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
I1217 00:07:15.440665 17911 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1217 00:07:15.440668 17911 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1217 00:07:15.441142 17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
I1217 00:07:15.442161 17911 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1217 00:07:15.442327 17911 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1217 00:07:15.442344 17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1217 00:07:15.442344 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.442389 17911 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1217 00:07:15.442409 17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1217 00:07:15.442747 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:07:15.442779 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.443454 17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
I1217 00:07:15.443937 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:07:15.443987 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.445045 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.445269 17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
I1217 00:07:15.445460 17911 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1217 00:07:15.446378 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.446418 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:07:15.446447 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.446779 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.446815 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.446887 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.447273 17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
I1217 00:07:15.447795 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.447843 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:07:15.447862 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.447925 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:07:15.447949 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.447999 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:07:15.448038 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.448127 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:07:15.448159 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.448525 17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
I1217 00:07:15.448563 17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
I1217 00:07:15.448865 17911 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1217 00:07:15.448919 17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
I1217 00:07:15.449305 17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
I1217 00:07:15.449711 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:07:15.449742 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.449912 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.450262 17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
I1217 00:07:15.451348 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:07:15.451379 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.451568 17911 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1217 00:07:15.451717 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.451838 17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
I1217 00:07:15.452663 17911 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1217 00:07:15.452680 17911 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1217 00:07:15.452730 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:07:15.452765 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.452967 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.453233 17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
I1217 00:07:15.453673 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:07:15.453710 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.453878 17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
I1217 00:07:15.455837 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.456327 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:07:15.456358 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:15.456549 17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
W1217 00:07:15.639583 17911 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60052->192.168.39.183:22: read: connection reset by peer
I1217 00:07:15.639621 17911 retry.go:31] will retry after 306.783579ms: ssh: handshake failed: read tcp 192.168.39.1:60052->192.168.39.183:22: read: connection reset by peer
W1217 00:07:15.673610 17911 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60064->192.168.39.183:22: read: connection reset by peer
I1217 00:07:15.673637 17911 retry.go:31] will retry after 222.936771ms: ssh: handshake failed: read tcp 192.168.39.1:60064->192.168.39.183:22: read: connection reset by peer
W1217 00:07:15.676198 17911 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60066->192.168.39.183:22: read: connection reset by peer
I1217 00:07:15.676225 17911 retry.go:31] will retry after 167.114733ms: ssh: handshake failed: read tcp 192.168.39.1:60066->192.168.39.183:22: read: connection reset by peer
I1217 00:07:16.058121 17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
I1217 00:07:16.058657 17911 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1217 00:07:16.058680 17911 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1217 00:07:16.138043 17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1217 00:07:16.164970 17911 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1217 00:07:16.165080 17911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1217 00:07:16.166109 17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1217 00:07:16.209072 17911 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1217 00:07:16.209096 17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1217 00:07:16.338376 17911 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1217 00:07:16.338410 17911 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1217 00:07:16.389123 17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1217 00:07:16.449627 17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1217 00:07:16.478976 17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1217 00:07:16.481833 17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1217 00:07:16.504160 17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1217 00:07:16.648895 17911 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1217 00:07:16.648920 17911 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1217 00:07:16.779408 17911 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1217 00:07:16.779466 17911 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1217 00:07:16.803054 17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1217 00:07:16.859711 17911 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1217 00:07:16.859734 17911 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1217 00:07:17.056132 17911 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1217 00:07:17.056157 17911 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1217 00:07:17.079409 17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1217 00:07:17.380824 17911 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1217 00:07:17.380857 17911 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1217 00:07:17.418239 17911 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1217 00:07:17.418269 17911 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1217 00:07:17.578626 17911 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1217 00:07:17.578653 17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1217 00:07:17.737935 17911 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1217 00:07:17.737989 17911 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1217 00:07:17.854536 17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1217 00:07:17.892610 17911 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1217 00:07:17.892642 17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1217 00:07:18.128210 17911 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1217 00:07:18.128252 17911 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1217 00:07:18.164557 17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1217 00:07:18.457266 17911 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1217 00:07:18.457302 17911 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1217 00:07:18.525965 17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1217 00:07:18.553683 17911 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1217 00:07:18.553713 17911 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1217 00:07:18.847333 17911 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1217 00:07:18.847357 17911 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1217 00:07:19.020352 17911 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1217 00:07:19.020378 17911 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1217 00:07:19.268051 17911 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1217 00:07:19.268082 17911 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1217 00:07:19.499765 17911 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1217 00:07:19.499793 17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1217 00:07:19.701749 17911 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1217 00:07:19.701773 17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1217 00:07:19.917917 17911 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1217 00:07:19.917946 17911 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1217 00:07:20.028468 17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1217 00:07:20.368063 17911 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1217 00:07:20.368091 17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1217 00:07:20.577675 17911 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1217 00:07:20.577700 17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1217 00:07:20.840316 17911 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1217 00:07:20.840348 17911 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1217 00:07:21.254726 17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1217 00:07:22.980034 17911 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1217 00:07:22.983216 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:22.983697 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:07:22.983721 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:22.983896 17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
I1217 00:07:23.167790 17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.109626629s)
I1217 00:07:23.167891 17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.029816106s)
I1217 00:07:23.167958 17911 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.002958804s)
I1217 00:07:23.168044 17911 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.002905383s)
I1217 00:07:23.168068 17911 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1217 00:07:23.168138 17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.002003478s)
I1217 00:07:23.168202 17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (6.779034562s)
I1217 00:07:23.168906 17911 node_ready.go:35] waiting up to 6m0s for node "addons-262069" to be "Ready" ...
I1217 00:07:23.252435 17911 node_ready.go:49] node "addons-262069" is "Ready"
I1217 00:07:23.252476 17911 node_ready.go:38] duration metric: took 83.538998ms for node "addons-262069" to be "Ready" ...
I1217 00:07:23.252492 17911 api_server.go:52] waiting for apiserver process to appear ...
I1217 00:07:23.252552 17911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1217 00:07:23.448636 17911 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1217 00:07:23.680247 17911 addons.go:239] Setting addon gcp-auth=true in "addons-262069"
I1217 00:07:23.680307 17911 host.go:66] Checking if "addons-262069" exists ...
I1217 00:07:23.682556 17911 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1217 00:07:23.685704 17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:23.686221 17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
I1217 00:07:23.686261 17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
I1217 00:07:23.686475 17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
I1217 00:07:23.832783 17911 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-262069" context rescaled to 1 replicas
I1217 00:07:24.234339 17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.784676615s)
I1217 00:07:24.234433 17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.755423832s)
I1217 00:07:24.234536 17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.752676235s)
I1217 00:07:24.234603 17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.730411912s)
I1217 00:07:24.234650 17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.431569811s)
W1217 00:07:24.341434 17911 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
I1217 00:07:26.043785 17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.189209071s)
I1217 00:07:26.043825 17911 addons.go:495] Verifying addon metrics-server=true in "addons-262069"
I1217 00:07:26.043874 17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.879282283s)
I1217 00:07:26.043907 17911 addons.go:495] Verifying addon registry=true in "addons-262069"
I1217 00:07:26.043926 17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.517925943s)
I1217 00:07:26.044838 17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.965384944s)
I1217 00:07:26.044865 17911 addons.go:495] Verifying addon ingress=true in "addons-262069"
I1217 00:07:26.045503 17911 out.go:179] * Verifying registry addon...
I1217 00:07:26.045503 17911 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-262069 service yakd-dashboard -n yakd-dashboard
I1217 00:07:26.046465 17911 out.go:179] * Verifying ingress addon...
I1217 00:07:26.048249 17911 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1217 00:07:26.048856 17911 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1217 00:07:26.077340 17911 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1217 00:07:26.077364 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:26.078257 17911 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1217 00:07:26.078278 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:26.527154 17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.498639754s)
W1217 00:07:26.527202 17911 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1217 00:07:26.527230 17911 retry.go:31] will retry after 288.69288ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1217 00:07:26.650042 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:26.669283 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:26.816202 17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1217 00:07:27.116774 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:27.118059 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:27.459575 17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.204763179s)
I1217 00:07:27.459613 17911 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.207034804s)
I1217 00:07:27.459627 17911 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-262069"
I1217 00:07:27.459643 17911 api_server.go:72] duration metric: took 12.044939006s to wait for apiserver process to appear ...
I1217 00:07:27.459651 17911 api_server.go:88] waiting for apiserver healthz status ...
I1217 00:07:27.459671 17911 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
I1217 00:07:27.459675 17911 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.777091314s)
I1217 00:07:27.461875 17911 out.go:179] * Verifying csi-hostpath-driver addon...
I1217 00:07:27.461891 17911 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1217 00:07:27.463433 17911 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1217 00:07:27.464103 17911 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1217 00:07:27.464857 17911 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1217 00:07:27.464877 17911 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1217 00:07:27.486295 17911 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
ok
I1217 00:07:27.492487 17911 api_server.go:141] control plane version: v1.34.2
I1217 00:07:27.492524 17911 api_server.go:131] duration metric: took 32.866106ms to wait for apiserver health ...
I1217 00:07:27.492534 17911 system_pods.go:43] waiting for kube-system pods to appear ...
I1217 00:07:27.521251 17911 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1217 00:07:27.521280 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:27.522309 17911 system_pods.go:59] 20 kube-system pods found
I1217 00:07:27.522341 17911 system_pods.go:61] "amd-gpu-device-plugin-h7ktx" [868af750-76b7-4d6a-8b9c-c20ef980f23c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1217 00:07:27.522352 17911 system_pods.go:61] "coredns-66bc5c9577-225dx" [d0273678-dce6-4db9-bdb2-ba3a3c08cdef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1217 00:07:27.522364 17911 system_pods.go:61] "coredns-66bc5c9577-qx99m" [1a417056-e982-4783-96a5-9b741dd696d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1217 00:07:27.522373 17911 system_pods.go:61] "csi-hostpath-attacher-0" [43ce0e61-2925-4f54-90f3-f9f854f69d01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1217 00:07:27.522379 17911 system_pods.go:61] "csi-hostpath-resizer-0" [8766aba1-494f-4e3d-92ae-fefb28e912b7] Pending
I1217 00:07:27.522388 17911 system_pods.go:61] "csi-hostpathplugin-bl7k4" [8f24a367-b121-47d8-961b-5dc07a0a08db] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1217 00:07:27.522395 17911 system_pods.go:61] "etcd-addons-262069" [94f8ab0a-9019-4c23-aa63-c66aee255be9] Running
I1217 00:07:27.522402 17911 system_pods.go:61] "kube-apiserver-addons-262069" [5973422d-5e3e-40b5-88f8-ce163eec138a] Running
I1217 00:07:27.522407 17911 system_pods.go:61] "kube-controller-manager-addons-262069" [95a9bcee-f05e-4599-9cb1-dff560827c59] Running
I1217 00:07:27.522416 17911 system_pods.go:61] "kube-ingress-dns-minikube" [a72b7afb-8519-407e-93cc-fb6d4827edf6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1217 00:07:27.522422 17911 system_pods.go:61] "kube-proxy-pdf4s" [c6e7cf26-13ad-48d5-8dc7-8bdc4518f890] Running
I1217 00:07:27.522431 17911 system_pods.go:61] "kube-scheduler-addons-262069" [52be5dac-ed10-4237-a532-22849ffcf509] Running
I1217 00:07:27.522441 17911 system_pods.go:61] "metrics-server-85b7d694d7-94n2m" [9b665994-667f-4a3b-b44d-9949b0c4761c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1217 00:07:27.522451 17911 system_pods.go:61] "nvidia-device-plugin-daemonset-wb64t" [7e312275-8868-442b-bb94-0569b43cbe03] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1217 00:07:27.522463 17911 system_pods.go:61] "registry-6b586f9694-z9bzt" [15209453-1113-446e-94b5-19d615f67036] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1217 00:07:27.522475 17911 system_pods.go:61] "registry-creds-764b6fb674-7r5ht" [8dac2506-ca74-4027-a05b-112bb00523e9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1217 00:07:27.522484 17911 system_pods.go:61] "registry-proxy-ng2lx" [f39654e9-51f3-4325-9568-3999f3904260] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1217 00:07:27.522493 17911 system_pods.go:61] "snapshot-controller-7d9fbc56b8-85jjc" [3452da8d-e4e0-4ca4-b768-3379b6b892c6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1217 00:07:27.522506 17911 system_pods.go:61] "snapshot-controller-7d9fbc56b8-s748j" [e6a06856-562a-45d6-af80-78e109d24a5e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1217 00:07:27.522513 17911 system_pods.go:61] "storage-provisioner" [68b668f5-e60f-44f4-8df7-5378eb708ccc] Running
I1217 00:07:27.522523 17911 system_pods.go:74] duration metric: took 29.982264ms to wait for pod list to return data ...
I1217 00:07:27.522534 17911 default_sa.go:34] waiting for default service account to be created ...
I1217 00:07:27.563155 17911 default_sa.go:45] found service account: "default"
I1217 00:07:27.563179 17911 default_sa.go:55] duration metric: took 40.636257ms for default service account to be created ...
I1217 00:07:27.563187 17911 system_pods.go:116] waiting for k8s-apps to be running ...
I1217 00:07:27.590074 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:27.594764 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:27.595156 17911 system_pods.go:86] 20 kube-system pods found
I1217 00:07:27.595184 17911 system_pods.go:89] "amd-gpu-device-plugin-h7ktx" [868af750-76b7-4d6a-8b9c-c20ef980f23c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1217 00:07:27.595194 17911 system_pods.go:89] "coredns-66bc5c9577-225dx" [d0273678-dce6-4db9-bdb2-ba3a3c08cdef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1217 00:07:27.595215 17911 system_pods.go:89] "coredns-66bc5c9577-qx99m" [1a417056-e982-4783-96a5-9b741dd696d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1217 00:07:27.595228 17911 system_pods.go:89] "csi-hostpath-attacher-0" [43ce0e61-2925-4f54-90f3-f9f854f69d01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1217 00:07:27.595237 17911 system_pods.go:89] "csi-hostpath-resizer-0" [8766aba1-494f-4e3d-92ae-fefb28e912b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1217 00:07:27.595248 17911 system_pods.go:89] "csi-hostpathplugin-bl7k4" [8f24a367-b121-47d8-961b-5dc07a0a08db] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1217 00:07:27.595254 17911 system_pods.go:89] "etcd-addons-262069" [94f8ab0a-9019-4c23-aa63-c66aee255be9] Running
I1217 00:07:27.595260 17911 system_pods.go:89] "kube-apiserver-addons-262069" [5973422d-5e3e-40b5-88f8-ce163eec138a] Running
I1217 00:07:27.595269 17911 system_pods.go:89] "kube-controller-manager-addons-262069" [95a9bcee-f05e-4599-9cb1-dff560827c59] Running
I1217 00:07:27.595277 17911 system_pods.go:89] "kube-ingress-dns-minikube" [a72b7afb-8519-407e-93cc-fb6d4827edf6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1217 00:07:27.595282 17911 system_pods.go:89] "kube-proxy-pdf4s" [c6e7cf26-13ad-48d5-8dc7-8bdc4518f890] Running
I1217 00:07:27.595288 17911 system_pods.go:89] "kube-scheduler-addons-262069" [52be5dac-ed10-4237-a532-22849ffcf509] Running
I1217 00:07:27.595296 17911 system_pods.go:89] "metrics-server-85b7d694d7-94n2m" [9b665994-667f-4a3b-b44d-9949b0c4761c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1217 00:07:27.595305 17911 system_pods.go:89] "nvidia-device-plugin-daemonset-wb64t" [7e312275-8868-442b-bb94-0569b43cbe03] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1217 00:07:27.595323 17911 system_pods.go:89] "registry-6b586f9694-z9bzt" [15209453-1113-446e-94b5-19d615f67036] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1217 00:07:27.595332 17911 system_pods.go:89] "registry-creds-764b6fb674-7r5ht" [8dac2506-ca74-4027-a05b-112bb00523e9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1217 00:07:27.595341 17911 system_pods.go:89] "registry-proxy-ng2lx" [f39654e9-51f3-4325-9568-3999f3904260] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1217 00:07:27.595349 17911 system_pods.go:89] "snapshot-controller-7d9fbc56b8-85jjc" [3452da8d-e4e0-4ca4-b768-3379b6b892c6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1217 00:07:27.595361 17911 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s748j" [e6a06856-562a-45d6-af80-78e109d24a5e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1217 00:07:27.595367 17911 system_pods.go:89] "storage-provisioner" [68b668f5-e60f-44f4-8df7-5378eb708ccc] Running
I1217 00:07:27.595376 17911 system_pods.go:126] duration metric: took 32.182806ms to wait for k8s-apps to be running ...
I1217 00:07:27.595389 17911 system_svc.go:44] waiting for kubelet service to be running ....
I1217 00:07:27.595438 17911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1217 00:07:27.607341 17911 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1217 00:07:27.607372 17911 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1217 00:07:27.676753 17911 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1217 00:07:27.676780 17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1217 00:07:27.768809 17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1217 00:07:27.972240 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:28.054596 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:28.056732 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:28.468935 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:28.533122 17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.716865508s)
I1217 00:07:28.533180 17911 system_svc.go:56] duration metric: took 937.784294ms WaitForService to wait for kubelet
I1217 00:07:28.533204 17911 kubeadm.go:587] duration metric: took 13.118498038s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1217 00:07:28.533232 17911 node_conditions.go:102] verifying NodePressure condition ...
I1217 00:07:28.539774 17911 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1217 00:07:28.539825 17911 node_conditions.go:123] node cpu capacity is 2
I1217 00:07:28.539847 17911 node_conditions.go:105] duration metric: took 6.608212ms to run NodePressure ...
I1217 00:07:28.539863 17911 start.go:242] waiting for startup goroutines ...
I1217 00:07:28.553473 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:28.554524 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:28.999577 17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.230727711s)
I1217 00:07:29.000949 17911 addons.go:495] Verifying addon gcp-auth=true in "addons-262069"
I1217 00:07:29.002934 17911 out.go:179] * Verifying gcp-auth addon...
I1217 00:07:29.005094 17911 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1217 00:07:29.042158 17911 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1217 00:07:29.042188 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:29.042355 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:29.067224 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:29.080857 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:29.473399 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:29.512362 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:29.552671 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:29.557419 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:29.970767 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:30.014300 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:30.054996 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:30.055047 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:30.471181 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:30.510940 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:30.564680 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:30.564860 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:30.970464 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:31.015240 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:31.076276 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:31.079281 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:31.471412 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:31.570893 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:31.571280 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:31.572637 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:31.969395 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:32.011305 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:32.052696 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:32.056038 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:32.471445 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:32.512292 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:32.553879 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:32.553962 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:32.969358 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:33.013273 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:33.057251 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:33.057859 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:33.472261 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:33.569255 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:33.574109 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:33.574222 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:33.971003 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:34.010644 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:34.072501 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:34.072537 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:34.468875 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:34.509298 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:34.570588 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:34.571139 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:34.972720 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:35.009941 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:35.053140 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:35.054119 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:35.471063 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:35.509410 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:35.555736 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:35.558843 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:35.972347 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:36.011546 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:36.058257 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:36.062713 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:36.472999 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:36.514512 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:36.553748 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:36.554050 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:36.972555 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:37.015664 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:37.052848 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:37.053512 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:37.470244 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:37.574528 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:37.575162 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:37.575699 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:37.969611 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:38.011709 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:38.054922 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:38.057083 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:38.468425 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:38.508424 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:38.559737 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:38.561254 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:38.969313 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:39.012056 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:39.054166 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:39.056761 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:39.470884 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:39.512251 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:39.556044 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:39.557062 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:39.969538 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:40.010513 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:40.056688 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:40.056959 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:40.469650 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:40.511169 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:40.557368 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:40.558850 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:41.259916 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:41.260168 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:41.260171 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:41.264239 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:41.471462 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:41.511527 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:41.555469 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:41.555980 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:41.970975 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:42.010083 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:42.057642 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:42.062488 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:42.470877 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:42.509878 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:42.553476 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:42.553565 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:42.969260 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:43.012571 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:43.056865 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:43.060055 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:43.531191 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:43.531224 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:43.553797 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:43.557249 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:43.971917 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:44.019077 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:44.053902 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:44.057789 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:44.488323 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:44.514408 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:44.553272 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:44.554800 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:45.045001 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:45.045365 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:45.055144 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:45.057990 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:45.472257 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:45.513061 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:45.558944 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:45.560482 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:45.970331 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:46.015174 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:46.054644 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:46.058513 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:46.470734 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:46.510655 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:46.553569 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:46.554994 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:46.968868 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:47.009887 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:47.053569 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:47.053790 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:47.468891 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:47.510686 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:47.552893 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:47.552953 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:47.970246 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:48.014072 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:48.055170 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:48.057117 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:48.471174 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:48.509328 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:48.552937 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:48.556235 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:48.970179 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:49.011217 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:49.057591 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:49.058824 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:49.468196 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:49.508800 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:49.554820 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:49.558328 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:49.969540 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:50.009884 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:50.051990 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:50.053209 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:50.468847 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:50.509267 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:50.552710 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:50.554566 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:50.969369 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:51.014234 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:51.056878 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:51.057504 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:51.469130 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:51.510401 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:51.556212 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:51.558817 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:51.973892 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:52.009854 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:52.054791 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:52.055353 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:52.470441 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:52.510752 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:52.555734 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:52.555784 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:52.968764 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:53.009788 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:53.053007 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:53.053298 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:53.469533 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:53.508486 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:53.552210 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:53.554232 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:53.969955 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:54.009178 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:54.057674 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:54.058629 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 00:07:54.471903 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:54.509508 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:54.553879 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:54.557422 17911 kapi.go:107] duration metric: took 28.509174545s to wait for kubernetes.io/minikube-addons=registry ...
I1217 00:07:54.971358 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:55.009735 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:55.055734 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:55.469255 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:55.508425 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:55.579166 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:55.978059 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:56.015536 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:56.062652 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:56.474340 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:56.509394 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:56.554907 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:56.969166 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:57.009250 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:57.053589 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:57.476444 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:57.510305 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:57.555861 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:57.969310 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:58.013711 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:58.055842 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:58.469585 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:58.514571 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:58.553119 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:58.971756 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:59.009835 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:59.056749 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:59.469757 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:07:59.511728 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:07:59.554137 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:07:59.971284 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:00.012673 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:00.054011 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:00.472353 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:00.511550 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:00.553731 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:00.972995 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:01.010764 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:01.053100 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:01.488357 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:01.509975 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:01.554277 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:01.971269 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:02.011234 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:02.056597 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:02.475814 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:02.512259 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:02.614072 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:02.970593 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:03.010930 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:03.053932 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:03.470658 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:03.511436 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:03.556126 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:03.970729 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:04.018261 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:04.056001 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:04.469804 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:04.509283 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:04.554560 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:04.969258 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:05.011237 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:05.052809 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:05.472488 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:05.508429 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:05.567004 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:05.970218 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:06.009070 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:06.054821 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:06.471710 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:06.573582 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:06.574152 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:06.978809 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:07.013232 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:07.052988 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:07.467068 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:07.512956 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:07.553371 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:07.972518 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:08.072528 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:08.073356 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:08.476574 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:08.519301 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:08.558623 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:08.969946 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:09.012787 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:09.055728 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:09.469564 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:09.511178 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:09.555968 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:09.979287 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:10.076983 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:10.077150 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:10.469048 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:10.512947 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:10.556336 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:10.970999 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:11.014462 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:11.053571 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:11.476203 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:11.574896 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:11.575003 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:11.972255 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:12.012160 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:12.075209 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:12.468618 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:12.510036 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:12.554967 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:12.970328 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:13.014528 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:13.055264 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:13.475150 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:13.513871 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:13.581751 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:13.971462 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:14.009609 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:14.053817 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:14.570064 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:14.570191 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:14.571144 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:14.978604 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:15.076423 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:15.076429 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:15.480512 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:15.511903 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:15.578239 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:15.970448 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:16.013445 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:16.057961 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:16.469867 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:16.512087 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:16.554339 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:16.971922 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:17.011014 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:17.052966 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:17.469243 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:17.509399 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:17.555192 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:17.972321 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:18.020234 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:18.054373 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:18.472304 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:18.510741 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:18.556346 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:18.973157 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:19.008601 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:19.070752 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:19.471441 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:19.511336 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:19.554512 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:19.969096 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:20.008968 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:20.056104 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:20.474947 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:20.510417 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:20.555208 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:20.970771 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:21.068479 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:21.069291 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:21.470745 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:21.509824 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:21.555249 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:21.968516 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:22.008772 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:22.057314 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:22.471335 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 00:08:22.571671 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:22.571906 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:22.968208 17911 kapi.go:107] duration metric: took 55.504099804s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1217 00:08:23.010557 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:23.052999 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:23.508774 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:23.553503 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:24.009227 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:24.053502 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:24.510301 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:24.553164 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:25.008647 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:25.052906 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:25.509618 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:25.553668 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:26.009952 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:26.053329 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:26.509501 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:26.553798 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:27.010128 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:27.054112 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:27.509695 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:27.555314 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:28.009094 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:28.052736 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:28.509424 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:28.553089 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:29.009129 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:29.052669 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:29.510043 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:29.553256 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:30.013488 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:30.056057 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:30.508821 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:30.556323 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:31.012048 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:31.056588 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:31.512262 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:31.560472 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:32.015258 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:32.052990 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:32.514186 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:32.554089 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:33.011346 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:33.055098 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:33.508877 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:33.555467 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:34.014899 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:34.057481 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:34.513828 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:34.555227 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:35.011931 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:35.053287 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:35.510461 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:35.553166 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:36.010311 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:36.054203 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:36.511255 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:36.554447 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:37.010713 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:37.054440 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:37.510371 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:37.555966 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:38.009168 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:38.053430 17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 00:08:38.516403 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:38.565621 17911 kapi.go:107] duration metric: took 1m12.516758297s to wait for app.kubernetes.io/name=ingress-nginx ...
I1217 00:08:39.010096 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:39.515168 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:40.015245 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:40.510670 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:41.011147 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:41.510552 17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 00:08:42.009825 17911 kapi.go:107] duration metric: took 1m13.004727494s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1217 00:08:42.011806 17911 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-262069 cluster.
I1217 00:08:42.013451 17911 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1217 00:08:42.014720 17911 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I1217 00:08:42.016390 17911 out.go:179] * Enabled addons: inspektor-gadget, storage-provisioner, ingress-dns, amd-gpu-device-plugin, nvidia-device-plugin, registry-creds, cloud-spanner, default-storageclass, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
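The gcp-auth opt-out described in the messages above can be expressed as a pod label. A minimal sketch (the pod name, container name, and image are placeholders; only the `gcp-auth-skip-secret` label key is taken from the log message itself):

```yaml
# Hypothetical pod spec: the gcp-auth-skip-secret label (any value) opts this
# pod out of the automatic GCP credential mount described in the log above.
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds          # placeholder name
  labels:
    gcp-auth-skip-secret: "true"
spec:
  containers:
  - name: app                  # placeholder container
    image: busybox:stable
    command: ["sleep", "3600"]
```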
I1217 00:08:42.017557 17911 addons.go:530] duration metric: took 1m26.602845252s for enable addons: enabled=[inspektor-gadget storage-provisioner ingress-dns amd-gpu-device-plugin nvidia-device-plugin registry-creds cloud-spanner default-storageclass metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I1217 00:08:42.017615 17911 start.go:247] waiting for cluster config update ...
I1217 00:08:42.017645 17911 start.go:256] writing updated cluster config ...
I1217 00:08:42.017992 17911 ssh_runner.go:195] Run: rm -f paused
I1217 00:08:42.029830 17911 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1217 00:08:42.110359 17911 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-225dx" in "kube-system" namespace to be "Ready" or be gone ...
I1217 00:08:42.115474 17911 pod_ready.go:94] pod "coredns-66bc5c9577-225dx" is "Ready"
I1217 00:08:42.115513 17911 pod_ready.go:86] duration metric: took 5.121006ms for pod "coredns-66bc5c9577-225dx" in "kube-system" namespace to be "Ready" or be gone ...
I1217 00:08:42.117735 17911 pod_ready.go:83] waiting for pod "etcd-addons-262069" in "kube-system" namespace to be "Ready" or be gone ...
I1217 00:08:42.123956 17911 pod_ready.go:94] pod "etcd-addons-262069" is "Ready"
I1217 00:08:42.123984 17911 pod_ready.go:86] duration metric: took 6.214519ms for pod "etcd-addons-262069" in "kube-system" namespace to be "Ready" or be gone ...
I1217 00:08:42.126497 17911 pod_ready.go:83] waiting for pod "kube-apiserver-addons-262069" in "kube-system" namespace to be "Ready" or be gone ...
I1217 00:08:42.132058 17911 pod_ready.go:94] pod "kube-apiserver-addons-262069" is "Ready"
I1217 00:08:42.132088 17911 pod_ready.go:86] duration metric: took 5.566687ms for pod "kube-apiserver-addons-262069" in "kube-system" namespace to be "Ready" or be gone ...
I1217 00:08:42.134190 17911 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-262069" in "kube-system" namespace to be "Ready" or be gone ...
I1217 00:08:42.434687 17911 pod_ready.go:94] pod "kube-controller-manager-addons-262069" is "Ready"
I1217 00:08:42.434722 17911 pod_ready.go:86] duration metric: took 300.501021ms for pod "kube-controller-manager-addons-262069" in "kube-system" namespace to be "Ready" or be gone ...
I1217 00:08:42.636062 17911 pod_ready.go:83] waiting for pod "kube-proxy-pdf4s" in "kube-system" namespace to be "Ready" or be gone ...
I1217 00:08:43.034447 17911 pod_ready.go:94] pod "kube-proxy-pdf4s" is "Ready"
I1217 00:08:43.034482 17911 pod_ready.go:86] duration metric: took 398.388512ms for pod "kube-proxy-pdf4s" in "kube-system" namespace to be "Ready" or be gone ...
I1217 00:08:43.233990 17911 pod_ready.go:83] waiting for pod "kube-scheduler-addons-262069" in "kube-system" namespace to be "Ready" or be gone ...
I1217 00:08:43.634302 17911 pod_ready.go:94] pod "kube-scheduler-addons-262069" is "Ready"
I1217 00:08:43.634338 17911 pod_ready.go:86] duration metric: took 400.293515ms for pod "kube-scheduler-addons-262069" in "kube-system" namespace to be "Ready" or be gone ...
I1217 00:08:43.634357 17911 pod_ready.go:40] duration metric: took 1.604489345s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1217 00:08:43.711172 17911 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
I1217 00:08:43.712755 17911 out.go:179] * Done! kubectl is now configured to use "addons-262069" cluster and "default" namespace by default
==> CRI-O <==
Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.212806499Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0106d92d-ac83-4ff6-aa21-39a08f015b5f name=/runtime.v1.RuntimeService/Version
Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.214247864Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=81498190-5c1e-4e7c-b3c9-e45ecd9c4d5b name=/runtime.v1.ImageService/ImageFsInfo
Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.215750041Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765930299215720682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:554377,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=81498190-5c1e-4e7c-b3c9-e45ecd9c4d5b name=/runtime.v1.ImageService/ImageFsInfo
Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.216586787Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f528892-828a-4e14-95ff-edc9a72ebd3a name=/runtime.v1.RuntimeService/ListContainers
Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.216661187Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f528892-828a-4e14-95ff-edc9a72ebd3a name=/runtime.v1.RuntimeService/ListContainers
Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.217584622Z" level=debug msg="Setting stage for resource k8s_hello-world-app_hello-world-app-5d498dc89-98t54_default_5d5f0ee3-96d7-4fd9-a8f1-c32bda978dc4_0 from container spec configuration to container runtime creation" file="resourcestore/resourcestore.go:227" id=a60b09d6-158c-436a-8b1c-c5be80434b30 name=/runtime.v1.RuntimeService/CreateContainer
Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.217670705Z" level=debug msg="running conmon: /usr/bin/conmon" args="[-b /var/run/containers/storage/overlay-containers/96af083870aa83e5678ae1706f84ca43c468921acb244c7effa6c016aa127bcf/userdata -c 96af083870aa83e5678ae1706f84ca43c468921acb244c7effa6c016aa127bcf --exit-dir /var/run/crio/exits -l /var/log/pods/default_hello-world-app-5d498dc89-98t54_5d5f0ee3-96d7-4fd9-a8f1-c32bda978dc4/hello-world-app/0.log --log-level debug -n k8s_hello-world-app_hello-world-app-5d498dc89-98t54_default_5d5f0ee3-96d7-4fd9-a8f1-c32bda978dc4_0 -P /var/run/containers/storage/overlay-containers/96af083870aa83e5678ae1706f84ca43c468921acb244c7effa6c016aa127bcf/userdata/conmon-pidfile -p /var/run/containers/storage/overlay-containers/96af083870aa83e5678ae1706f84ca43c468921acb244c7effa6c016aa127bcf/userdata/pidfile --persist-dir /var/lib/containers/storage/overlay-containers/96af083870aa83e5678ae1706f84ca43c468921acb244c7effa6c016aa127bcf/userdata -r /usr/bin/runc --runtime-arg --root=/run/runc --socket-dir-path /var/run/crio --syslog -u 96af083870aa83e5678ae1706f84ca43c468921acb244c7effa6c016aa127bcf]" file="oci/runtime_oci.go:168" id=a60b09d6-158c-436a-8b1c-c5be80434b30 name=/runtime.v1.RuntimeService/CreateContainer
Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.219221518Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:da37136af4866b03c248d101bf3269e6e1507fe8823a2906d0743fa7e91a0fd0,PodSandboxId:95c9514edc6fdc5390e19cfcb6a451f0582ff2c73d50270cb9324b98a2a87e42,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765930155824691343,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d05a1d3-b173-402d-b417-d11ed3f1e38b,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47e8cf37ec48ffecac4366103fda90e67bbcfe4a41f098615a5749642e1e6c2,PodSandboxId:a0c6cedad82797adde6f3c570e1a006e2c0fdb2d4e546aa650c6e5516137527b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765930127335434595,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a6b0152-c8cd-4b61-8658-a844c2dedd65,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d791d8371392dd47d7174e66361893c24207e1bf308c7ab82681f9de907ab776,PodSandboxId:3888987b0ab2bab41431a7c0bac1f7b6806bbfe59a0ac2a7f3f36a3856e4f748,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765930118225822713,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-qhfmc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8105d8cc-5b94-4c6a-bee1-54b1e14b6391,},Annotations:map[string]string{io.kubernetes.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5a73bf3ec068e571089e68f105e4ab7e44acde052e9eb95de7b608a4fc09be6c,PodSandboxId:9480d585d56fbb92e05ff3308b81c006069c346d8aa9c21b5bd4fc7e4991197e,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765930090669111589,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d56md,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f9ae3f8a-34e3-471c-8324-23bee411de9d,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99842303bb27853949e0f1665f8390690a352102ef3556aa78ab8080a15ac570,PodSandboxId:6178d285bfbd316561b170df74570c6719d0c89544a0c87043e7ec65f534e66a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765930089764496270,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nx5df,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 36ee754c-16ce-4b51-a73b-e9b7f470849a,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d12b37b795e3391d909d4371fdf670fdeaf7ee2c6921a88491d91f4007f0bc0e,PodSandboxId:b3a444a50c80f6945a02f6ad9ce3b921129fddee6795b33c61bc26fba15308f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765930086244761959,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-qdlpj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: cc8b49e9-68ff-4324-874a-662d24fed8c2,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf166d852d17d2293c0962f69700a90a7d0de70a404f0a1d773b83e67bb68849,PodSandboxId:c4648e07535e2a80e2afe73a882d6f0bd6b561fd5979695b9a30bf3a345caa74,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765930067342465709,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72b7afb-8519-407e-93cc-fb6d4827edf6,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76d2536ddb378019184abc273182b5c9efc0671d0e5a07283e39a77e7463bac,PodSandboxId:b6187cb2f2b25b1c6aa7a065827616b5afbecdadadd21d66e100baef0b18bc54,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765930054053557622,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-h7ktx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 868af750-76b7-4d6a-8b9c-c20ef980f23c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd1fdba2b689a084415531f7474442db44470fbe88cccf6cc431a5d63e3e0f4e,PodSandboxId:b1304a0bf9b4b4914f299b6fc14724b72425d8a0fe187b3ef18eade6322683dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765930044628296994,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b668f5-e60f-44f4-8df7-5378eb708ccc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf1e5703f653f9d5f4dbdfde28ffe80fb515d7b12142a5417d50714466645732,PodSandboxId:fa54351057863bdcc9ea220db693cbcc7c16ab52d48588e0af8f15e9c57844a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765930037664588312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-225dx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0273678-dce6-4db9-bdb2-ba3a3c08cdef,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6f4f5a400e23f398e0aab5335420b4e49cdde8aa1f8aa33525397d22505556,PodSandboxId:86c542fe315234e3a8bc67df05ce934e338a3c1040a4e5ccc2fbee483b264027,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765930036784568111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pdf4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6e7cf26-13ad-48d5-8dc7-8bdc4518f890,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2fbdca13e384bcbfd3fbdaf9d95ad5967c5096ccf2372b109699fddf5e0bba5,PodSandboxId:1b5db531bb4eb31668424c055dede534a8da8a5336328e8f28129ca22af6eb4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765930023925855744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-262069,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70f87735738bf609c468945d5b40c70e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:375c642c900b45d46f1a83108aa9915adf2f8a5967893585a022990a60789ab1,PodSandboxId:d1115a328d57639bfb7928690a82aad17c808148b17f126b75a24f7667c5a552,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765930023873653127,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-262069,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f259204777a715bea40fd47e464c877,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67450bc656f73dd9235124553c7b9a80e9f2e5403b09204044ae68765e6cdd43,PodSandboxId:3ea99958ef1d6f741f302c568fe7f2b53e69e4333c5c83d3589e686c80feb199,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765930023894190560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-262069,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea9f337cf9613b55c21a1b74e0c76d0b,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2108cbe18ef2e4cc687c754ecd34e7173cf6b37c68d3d441e41aa01b0f6b4ba3,PodSandboxId:cc00906076d954b41db3a94a7da98450d8204b4e09c237d59f3bb2e96bca3338,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765930023869582556,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-262069,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0c8a349d17e38fe2a6b518411e1f43b,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f528892-828a-4e14-95ff-edc9a72ebd3a name=/runtime.v1.RuntimeService/ListContainers
Dec 17 00:11:39 addons-262069 conmon[12395]: conmon 96af083870aa83e5678a <ndebug>: addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach}
Dec 17 00:11:39 addons-262069 conmon[12395]: conmon 96af083870aa83e5678a <ndebug>: terminal_ctrl_fd: 12
Dec 17 00:11:39 addons-262069 conmon[12395]: conmon 96af083870aa83e5678a <ndebug>: winsz read side: 16, winsz write side: 17
Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.269826988Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb7c770a-8126-4f0e-a753-d11701f777da name=/runtime.v1.RuntimeService/Version
Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.269924101Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb7c770a-8126-4f0e-a753-d11701f777da name=/runtime.v1.RuntimeService/Version
Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.273943652Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0b454a2d-9afe-45c5-9a6e-f604a977a55e name=/runtime.v1.ImageService/ImageFsInfo
Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.277511613Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765930299277285560,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:554377,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0b454a2d-9afe-45c5-9a6e-f604a977a55e name=/runtime.v1.ImageService/ImageFsInfo
Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.278677154Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ec42285-9a68-4a03-b6ab-40eed604748b name=/runtime.v1.RuntimeService/ListContainers
Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.278739289Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ec42285-9a68-4a03-b6ab-40eed604748b name=/runtime.v1.RuntimeService/ListContainers
Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.280073261Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:da37136af4866b03c248d101bf3269e6e1507fe8823a2906d0743fa7e91a0fd0,PodSandboxId:95c9514edc6fdc5390e19cfcb6a451f0582ff2c73d50270cb9324b98a2a87e42,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765930155824691343,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d05a1d3-b173-402d-b417-d11ed3f1e38b,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47e8cf37ec48ffecac4366103fda90e67bbcfe4a41f098615a5749642e1e6c2,PodSandboxId:a0c6cedad82797adde6f3c570e1a006e2c0fdb2d4e546aa650c6e5516137527b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765930127335434595,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a6b0152-c8cd-4b61-8658-a844c2dedd65,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d791d8371392dd47d7174e66361893c24207e1bf308c7ab82681f9de907ab776,PodSandboxId:3888987b0ab2bab41431a7c0bac1f7b6806bbfe59a0ac2a7f3f36a3856e4f748,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765930118225822713,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-qhfmc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8105d8cc-5b94-4c6a-bee1-54b1e14b6391,},Annotations:map[string]string{io.kubernetes.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5a73bf3ec068e571089e68f105e4ab7e44acde052e9eb95de7b608a4fc09be6c,PodSandboxId:9480d585d56fbb92e05ff3308b81c006069c346d8aa9c21b5bd4fc7e4991197e,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765930090669111589,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d56md,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f9ae3f8a-34e3-471c-8324-23bee411de9d,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99842303bb27853949e0f1665f8390690a352102ef3556aa78ab8080a15ac570,PodSandboxId:6178d285bfbd316561b170df74570c6719d0c89544a0c87043e7ec65f534e66a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765930089764496270,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nx5df,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 36ee754c-16ce-4b51-a73b-e9b7f470849a,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d12b37b795e3391d909d4371fdf670fdeaf7ee2c6921a88491d91f4007f0bc0e,PodSandboxId:b3a444a50c80f6945a02f6ad9ce3b921129fddee6795b33c61bc26fba15308f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765930086244761959,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-qdlpj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: cc8b49e9-68ff-4324-874a-662d24fed8c2,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf166d852d17d2293c0962f69700a90a7d0de70a404f0a1d773b83e67bb68849,PodSandboxId:c4648e07535e2a80e2afe73a882d6f0bd6b561fd5979695b9a30bf3a345caa74,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765930067342465709,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72b7afb-8519-407e-93cc-fb6d4827edf6,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76d2536ddb378019184abc273182b5c9efc0671d0e5a07283e39a77e7463bac,PodSandboxId:b6187cb2f2b25b1c6aa7a065827616b5afbecdadadd21d66e100baef0b18bc54,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765930054053557622,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-h7ktx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 868af750-76b7-4d6a-8b9c-c20ef980f23c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd1fdba2b689a084415531f7474442db44470fbe88cccf6cc431a5d63e3e0f4e,PodSandboxId:b1304a0bf9b4b4914f299b6fc14724b72425d8a0fe187b3ef18eade6322683dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765930044628296994,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b668f5-e60f-44f4-8df7-5378eb708ccc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf1e5703f653f9d5f4dbdfde28ffe80fb515d7b12142a5417d50714466645732,PodSandboxId:fa54351057863bdcc9ea220db693cbcc7c16ab52d48588e0af8f15e9c57844a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765930037664588312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-225dx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0273678-dce6-4db9-bdb2-ba3a3c08cdef,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6f4f5a400e23f398e0aab5335420b4e49cdde8aa1f8aa33525397d22505556,PodSandboxId:86c542fe315234e3a8bc67df05ce934e338a3c1040a4e5ccc2fbee483b264027,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765930036784568111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pdf4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6e7cf26-13ad-48d5-8dc7-8bdc4518f890,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2fbdca13e384bcbfd3fbdaf9d95ad5967c5096ccf2372b109699fddf5e0bba5,PodSandboxId:1b5db531bb4eb31668424c055dede534a8da8a5336328e8f28129ca22af6eb4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765930023925855744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-262069,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70f87735738bf609c468945d5b40c70e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:375c642c900b45d46f1a83108aa9915adf2f8a5967893585a022990a60789ab1,PodSandboxId:d1115a328d57639bfb7928690a82aad17c808148b17f126b75a24f7667c5a552,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765930023873653127,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-262069,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f259204777a715bea40fd47e464c877,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67450bc656f73dd9235124553c7b9a80e9f2e5403b09204044ae68765e6cdd43,PodSandboxId:3ea99958ef1d6f741f302c568fe7f2b53e69e4333c5c83d3589e686c80feb199,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765930023894190560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-262069,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea9f337cf9613b55c21a1b74e0c76d0b,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2108cbe18ef2e4cc687c754ecd34e7173cf6b37c68d3d441e41aa01b0f6b4ba3,PodSandboxId:cc00906076d954b41db3a94a7da98450d8204b4e09c237d59f3bb2e96bca3338,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765930023869582556,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-262069,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0c8a349d17e38fe2a6b518411e1f43b,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ec42285-9a68-4a03-b6ab-40eed604748b name=/runtime.v1.RuntimeService/ListContainers
Dec 17 00:11:39 addons-262069 conmon[12395]: conmon 96af083870aa83e5678a <ndebug>: container PID: 12412
Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.282151423Z" level=debug msg="Received container pid: 12412" file="oci/runtime_oci.go:284" id=a60b09d6-158c-436a-8b1c-c5be80434b30 name=/runtime.v1.RuntimeService/CreateContainer
Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.297354046Z" level=info msg="Created container 96af083870aa83e5678ae1706f84ca43c468921acb244c7effa6c016aa127bcf: default/hello-world-app-5d498dc89-98t54/hello-world-app" file="server/container_create.go:491" id=a60b09d6-158c-436a-8b1c-c5be80434b30 name=/runtime.v1.RuntimeService/CreateContainer
Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.297496983Z" level=debug msg="Response: &CreateContainerResponse{ContainerId:96af083870aa83e5678ae1706f84ca43c468921acb244c7effa6c016aa127bcf,}" file="otel-collector/interceptors.go:74" id=a60b09d6-158c-436a-8b1c-c5be80434b30 name=/runtime.v1.RuntimeService/CreateContainer
Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.298254689Z" level=debug msg="Request: &StartContainerRequest{ContainerId:96af083870aa83e5678ae1706f84ca43c468921acb244c7effa6c016aa127bcf,}" file="otel-collector/interceptors.go:62" id=5dbabbd6-71d3-40c2-a558-c3753f40f9c6 name=/runtime.v1.RuntimeService/StartContainer
Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.298635939Z" level=info msg="Starting container: 96af083870aa83e5678ae1706f84ca43c468921acb244c7effa6c016aa127bcf" file="server/container_start.go:21" id=5dbabbd6-71d3-40c2-a558-c3753f40f9c6 name=/runtime.v1.RuntimeService/StartContainer
Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.317718831Z" level=info msg="Started container" PID=12412 containerID=96af083870aa83e5678ae1706f84ca43c468921acb244c7effa6c016aa127bcf description=default/hello-world-app-5d498dc89-98t54/hello-world-app file="server/container_start.go:115" id=5dbabbd6-71d3-40c2-a558-c3753f40f9c6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d299adcafa430073f3f1a037770c2f02c3f7d0156034321e47fe98b887e2c890
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
96af083870aa8 docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 Less than a second ago Running hello-world-app 0 d299adcafa430 hello-world-app-5d498dc89-98t54 default
da37136af4866 public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff 2 minutes ago Running nginx 0 95c9514edc6fd nginx default
c47e8cf37ec48 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 2 minutes ago Running busybox 0 a0c6cedad8279 busybox default
d791d8371392d registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad 3 minutes ago Running controller 0 3888987b0ab2b ingress-nginx-controller-85d4c799dd-qhfmc ingress-nginx
5a73bf3ec068e a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e 3 minutes ago Exited patch 1 9480d585d56fb ingress-nginx-admission-patch-d56md ingress-nginx
99842303bb278 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285 3 minutes ago Exited create 0 6178d285bfbd3 ingress-nginx-admission-create-nx5df ingress-nginx
d12b37b795e33 docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef 3 minutes ago Running local-path-provisioner 0 b3a444a50c80f local-path-provisioner-648f6765c9-qdlpj local-path-storage
cf166d852d17d docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 3 minutes ago Running minikube-ingress-dns 0 c4648e07535e2 kube-ingress-dns-minikube kube-system
e76d2536ddb37 docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 4 minutes ago Running amd-gpu-device-plugin 0 b6187cb2f2b25 amd-gpu-device-plugin-h7ktx kube-system
dd1fdba2b689a 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 b1304a0bf9b4b storage-provisioner kube-system
bf1e5703f653f 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969 4 minutes ago Running coredns 0 fa54351057863 coredns-66bc5c9577-225dx kube-system
1a6f4f5a400e2 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45 4 minutes ago Running kube-proxy 0 86c542fe31523 kube-proxy-pdf4s kube-system
f2fbdca13e384 a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1 4 minutes ago Running etcd 0 1b5db531bb4eb etcd-addons-262069 kube-system
67450bc656f73 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952 4 minutes ago Running kube-scheduler 0 3ea99958ef1d6 kube-scheduler-addons-262069 kube-system
375c642c900b4 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8 4 minutes ago Running kube-controller-manager 0 d1115a328d576 kube-controller-manager-addons-262069 kube-system
2108cbe18ef2e a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85 4 minutes ago Running kube-apiserver 0 cc00906076d95 kube-apiserver-addons-262069 kube-system
==> coredns [bf1e5703f653f9d5f4dbdfde28ffe80fb515d7b12142a5417d50714466645732] <==
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
[INFO] Reloading complete
[INFO] 127.0.0.1:32993 - 2111 "HINFO IN 2496638363767256317.4362979049479113296. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022927187s
[INFO] 10.244.0.23:49586 - 37635 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000517219s
[INFO] 10.244.0.23:52028 - 33310 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.003110323s
[INFO] 10.244.0.23:53126 - 40773 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000156478s
[INFO] 10.244.0.23:45334 - 26517 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000179076s
[INFO] 10.244.0.23:36825 - 22047 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000204358s
[INFO] 10.244.0.23:33288 - 65522 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000170427s
[INFO] 10.244.0.23:42587 - 17685 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005228337s
[INFO] 10.244.0.23:56158 - 10600 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.005586035s
[INFO] 10.244.0.28:37036 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00038996s
[INFO] 10.244.0.28:60967 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000184943s
==> describe nodes <==
Name: addons-262069
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-262069
kubernetes.io/os=linux
minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
minikube.k8s.io/name=addons-262069
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_17T00_07_10_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-262069
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 17 Dec 2025 00:07:06 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-262069
AcquireTime: <unset>
RenewTime: Wed, 17 Dec 2025 00:11:36 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 17 Dec 2025 00:09:43 +0000 Wed, 17 Dec 2025 00:07:04 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 17 Dec 2025 00:09:43 +0000 Wed, 17 Dec 2025 00:07:04 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 17 Dec 2025 00:09:43 +0000 Wed, 17 Dec 2025 00:07:04 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 17 Dec 2025 00:09:43 +0000 Wed, 17 Dec 2025 00:07:11 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.183
Hostname: addons-262069
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
System Info:
Machine ID: c11e3475a3334013be6a553f88d11a60
System UUID: c11e3475-a333-4013-be6a-553f88d11a60
Boot ID: d44a487a-f7ff-4581-bcd5-fa72f4800bda
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.34.2
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (14 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m55s
default hello-world-app-5d498dc89-98t54 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m29s
ingress-nginx ingress-nginx-controller-85d4c799dd-qhfmc 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m14s
kube-system amd-gpu-device-plugin-h7ktx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m20s
kube-system coredns-66bc5c9577-225dx 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4m23s
kube-system etcd-addons-262069 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 4m31s
kube-system kube-apiserver-addons-262069 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m29s
kube-system kube-controller-manager-addons-262069 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m31s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m18s
kube-system kube-proxy-pdf4s 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m24s
kube-system kube-scheduler-addons-262069 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m29s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m18s
local-path-storage local-path-provisioner-648f6765c9-qdlpj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m16s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m21s kube-proxy
Normal Starting 4m30s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 4m29s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 4m29s kubelet Node addons-262069 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m29s kubelet Node addons-262069 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m29s kubelet Node addons-262069 status is now: NodeHasSufficientPID
Normal NodeReady 4m28s kubelet Node addons-262069 status is now: NodeReady
Normal RegisteredNode 4m25s node-controller Node addons-262069 event: Registered Node addons-262069 in Controller
==> dmesg <==
[ +0.490734] kauditd_printk_skb: 251 callbacks suppressed
[ +0.370480] kauditd_printk_skb: 368 callbacks suppressed
[ +8.016002] kauditd_printk_skb: 110 callbacks suppressed
[ +8.254233] kauditd_printk_skb: 11 callbacks suppressed
[ +5.861974] kauditd_printk_skb: 26 callbacks suppressed
[ +6.557112] kauditd_printk_skb: 32 callbacks suppressed
[Dec17 00:08] kauditd_printk_skb: 32 callbacks suppressed
[ +5.752699] kauditd_printk_skb: 131 callbacks suppressed
[ +3.727066] kauditd_printk_skb: 142 callbacks suppressed
[ +5.598833] kauditd_printk_skb: 90 callbacks suppressed
[ +0.000068] kauditd_printk_skb: 5 callbacks suppressed
[ +0.000124] kauditd_printk_skb: 29 callbacks suppressed
[ +5.149438] kauditd_printk_skb: 53 callbacks suppressed
[ +2.453226] kauditd_printk_skb: 47 callbacks suppressed
[ +10.743783] kauditd_printk_skb: 17 callbacks suppressed
[Dec17 00:09] kauditd_printk_skb: 22 callbacks suppressed
[ +4.668012] kauditd_printk_skb: 38 callbacks suppressed
[ +0.000111] kauditd_printk_skb: 109 callbacks suppressed
[ +1.209565] kauditd_printk_skb: 129 callbacks suppressed
[ +0.308204] kauditd_printk_skb: 128 callbacks suppressed
[ +0.306667] kauditd_printk_skb: 124 callbacks suppressed
[ +4.538301] kauditd_printk_skb: 25 callbacks suppressed
[ +5.211458] kauditd_printk_skb: 93 callbacks suppressed
[ +0.684805] kauditd_printk_skb: 78 callbacks suppressed
[Dec17 00:11] kauditd_printk_skb: 71 callbacks suppressed
==> etcd [f2fbdca13e384bcbfd3fbdaf9d95ad5967c5096ccf2372b109699fddf5e0bba5] <==
{"level":"warn","ts":"2025-12-17T00:07:41.248301Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"204.267785ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-17T00:07:41.248325Z","caller":"traceutil/trace.go:172","msg":"trace[1789477153] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:919; }","duration":"204.294573ms","start":"2025-12-17T00:07:41.044025Z","end":"2025-12-17T00:07:41.248320Z","steps":["trace[1789477153] 'agreement among raft nodes before linearized reading' (duration: 204.251767ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T00:07:43.518170Z","caller":"traceutil/trace.go:172","msg":"trace[870624884] transaction","detail":"{read_only:false; response_revision:922; number_of_response:1; }","duration":"211.032364ms","start":"2025-12-17T00:07:43.307124Z","end":"2025-12-17T00:07:43.518156Z","steps":["trace[870624884] 'process raft request' (duration: 210.683554ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-17T00:07:44.901478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45412","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-17T00:07:44.950594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45428","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-17T00:07:44.987938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45448","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-12-17T00:07:45.011463Z","caller":"traceutil/trace.go:172","msg":"trace[1211641140] transaction","detail":"{read_only:false; response_revision:923; number_of_response:1; }","duration":"199.184083ms","start":"2025-12-17T00:07:44.812267Z","end":"2025-12-17T00:07:45.011451Z","steps":["trace[1211641140] 'process raft request' (duration: 199.06416ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-17T00:07:45.050200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45470","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-12-17T00:07:55.347228Z","caller":"traceutil/trace.go:172","msg":"trace[1250803468] transaction","detail":"{read_only:false; response_revision:955; number_of_response:1; }","duration":"156.29914ms","start":"2025-12-17T00:07:55.190916Z","end":"2025-12-17T00:07:55.347215Z","steps":["trace[1250803468] 'process raft request' (duration: 156.19955ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T00:07:59.732386Z","caller":"traceutil/trace.go:172","msg":"trace[633269512] transaction","detail":"{read_only:false; response_revision:974; number_of_response:1; }","duration":"171.300091ms","start":"2025-12-17T00:07:59.561074Z","end":"2025-12-17T00:07:59.732374Z","steps":["trace[633269512] 'process raft request' (duration: 171.112972ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T00:07:59.742239Z","caller":"traceutil/trace.go:172","msg":"trace[1570570202] transaction","detail":"{read_only:false; response_revision:975; number_of_response:1; }","duration":"160.705473ms","start":"2025-12-17T00:07:59.581522Z","end":"2025-12-17T00:07:59.742227Z","steps":["trace[1570570202] 'process raft request' (duration: 160.621537ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T00:08:03.927302Z","caller":"traceutil/trace.go:172","msg":"trace[828096755] transaction","detail":"{read_only:false; response_revision:992; number_of_response:1; }","duration":"118.709888ms","start":"2025-12-17T00:08:03.808393Z","end":"2025-12-17T00:08:03.927103Z","steps":["trace[828096755] 'process raft request' (duration: 118.324583ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T00:08:26.347398Z","caller":"traceutil/trace.go:172","msg":"trace[625907743] transaction","detail":"{read_only:false; response_revision:1132; number_of_response:1; }","duration":"179.798685ms","start":"2025-12-17T00:08:26.167580Z","end":"2025-12-17T00:08:26.347379Z","steps":["trace[625907743] 'process raft request' (duration: 179.704746ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T00:08:44.267267Z","caller":"traceutil/trace.go:172","msg":"trace[178099999] transaction","detail":"{read_only:false; response_revision:1188; number_of_response:1; }","duration":"147.01678ms","start":"2025-12-17T00:08:44.120238Z","end":"2025-12-17T00:08:44.267255Z","steps":["trace[178099999] 'process raft request' (duration: 146.90828ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T00:09:08.972954Z","caller":"traceutil/trace.go:172","msg":"trace[1268266188] linearizableReadLoop","detail":"{readStateIndex:1377; appliedIndex:1377; }","duration":"236.606952ms","start":"2025-12-17T00:09:08.736318Z","end":"2025-12-17T00:09:08.972925Z","steps":["trace[1268266188] 'read index received' (duration: 236.600783ms)","trace[1268266188] 'applied index is now lower than readState.Index' (duration: 5.234µs)"],"step_count":2}
{"level":"info","ts":"2025-12-17T00:09:08.973162Z","caller":"traceutil/trace.go:172","msg":"trace[533549602] transaction","detail":"{read_only:false; response_revision:1335; number_of_response:1; }","duration":"285.016673ms","start":"2025-12-17T00:09:08.688115Z","end":"2025-12-17T00:09:08.973132Z","steps":["trace[533549602] 'process raft request' (duration: 284.896937ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-17T00:09:08.973309Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"236.938261ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" limit:1 ","response":"range_response_count:1 size:822"}
{"level":"info","ts":"2025-12-17T00:09:08.973335Z","caller":"traceutil/trace.go:172","msg":"trace[1937506425] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1335; }","duration":"237.017925ms","start":"2025-12-17T00:09:08.736312Z","end":"2025-12-17T00:09:08.973330Z","steps":["trace[1937506425] 'agreement among raft nodes before linearized reading' (duration: 236.852255ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T00:09:08.976342Z","caller":"traceutil/trace.go:172","msg":"trace[42840466] transaction","detail":"{read_only:false; response_revision:1336; number_of_response:1; }","duration":"211.286948ms","start":"2025-12-17T00:09:08.765043Z","end":"2025-12-17T00:09:08.976330Z","steps":["trace[42840466] 'process raft request' (duration: 210.530419ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T00:09:15.629195Z","caller":"traceutil/trace.go:172","msg":"trace[1783732182] linearizableReadLoop","detail":"{readStateIndex:1460; appliedIndex:1460; }","duration":"124.261047ms","start":"2025-12-17T00:09:15.504918Z","end":"2025-12-17T00:09:15.629179Z","steps":["trace[1783732182] 'read index received' (duration: 124.255347ms)","trace[1783732182] 'applied index is now lower than readState.Index' (duration: 5.128µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-17T00:09:15.629368Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.42403ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-17T00:09:15.629394Z","caller":"traceutil/trace.go:172","msg":"trace[1675442169] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1414; }","duration":"124.47451ms","start":"2025-12-17T00:09:15.504914Z","end":"2025-12-17T00:09:15.629388Z","steps":["trace[1675442169] 'agreement among raft nodes before linearized reading' (duration: 124.393254ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T00:09:15.629725Z","caller":"traceutil/trace.go:172","msg":"trace[1851335785] transaction","detail":"{read_only:false; response_revision:1415; number_of_response:1; }","duration":"211.627983ms","start":"2025-12-17T00:09:15.418086Z","end":"2025-12-17T00:09:15.629714Z","steps":["trace[1851335785] 'process raft request' (duration: 211.51416ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T00:09:20.573530Z","caller":"traceutil/trace.go:172","msg":"trace[360089175] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1464; }","duration":"147.845395ms","start":"2025-12-17T00:09:20.425672Z","end":"2025-12-17T00:09:20.573517Z","steps":["trace[360089175] 'process raft request' (duration: 147.402934ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T00:09:20.578788Z","caller":"traceutil/trace.go:172","msg":"trace[1569039063] transaction","detail":"{read_only:false; response_revision:1465; number_of_response:1; }","duration":"103.462518ms","start":"2025-12-17T00:09:20.475314Z","end":"2025-12-17T00:09:20.578777Z","steps":["trace[1569039063] 'process raft request' (duration: 103.256919ms)"],"step_count":1}
==> kernel <==
00:11:39 up 5 min, 0 users, load average: 0.89, 1.63, 0.82
Linux addons-262069 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 03:41:16 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [2108cbe18ef2e4cc687c754ecd34e7173cf6b37c68d3d441e41aa01b0f6b4ba3] <==
Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I1217 00:08:02.511937 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E1217 00:08:54.836619 1 conn.go:339] Error on socket receive: read tcp 192.168.39.183:8443->192.168.39.1:46146: use of closed network connection
E1217 00:08:55.095847 1 conn.go:339] Error on socket receive: read tcp 192.168.39.183:8443->192.168.39.1:46178: use of closed network connection
I1217 00:09:04.410386 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.8.95"}
I1217 00:09:10.143227 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1217 00:09:10.389337 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.199.61"}
I1217 00:09:21.949277 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I1217 00:09:48.897162 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1217 00:09:48.897222 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1217 00:09:48.950537 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1217 00:09:48.950575 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1217 00:09:48.982783 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1217 00:09:48.982892 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1217 00:09:49.009460 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1217 00:09:49.009806 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1217 00:09:49.032658 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1217 00:09:49.034541 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1217 00:09:50.009521 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W1217 00:09:50.032878 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
W1217 00:09:50.052208 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
I1217 00:10:03.484790 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1217 00:11:37.923884 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.91.62"}
==> kube-controller-manager [375c642c900b45d46f1a83108aa9915adf2f8a5967893585a022990a60789ab1] <==
E1217 00:09:59.432096 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 00:10:06.285681 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 00:10:06.286915 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 00:10:09.660841 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 00:10:09.662100 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 00:10:10.225713 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 00:10:10.226978 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
I1217 00:10:16.106247 1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
I1217 00:10:16.106367 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1217 00:10:16.119946 1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
I1217 00:10:16.120037 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
E1217 00:10:26.173494 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 00:10:26.174881 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 00:10:32.166765 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 00:10:32.168534 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 00:10:32.464493 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 00:10:32.465829 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 00:10:54.920675 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 00:10:54.921677 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 00:11:05.906154 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 00:11:05.907342 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 00:11:08.365779 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 00:11:08.367145 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 00:11:38.002098 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 00:11:38.003979 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
==> kube-proxy [1a6f4f5a400e23f398e0aab5335420b4e49cdde8aa1f8aa33525397d22505556] <==
I1217 00:07:17.538277 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1217 00:07:17.639224 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1217 00:07:17.639292 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.183"]
E1217 00:07:17.639376 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1217 00:07:17.888345 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1217 00:07:17.888482 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1217 00:07:17.888514 1 server_linux.go:132] "Using iptables Proxier"
I1217 00:07:17.939148 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1217 00:07:17.972084 1 server.go:527] "Version info" version="v1.34.2"
I1217 00:07:17.973141 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1217 00:07:17.994438 1 config.go:200] "Starting service config controller"
I1217 00:07:17.994471 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1217 00:07:18.009344 1 config.go:403] "Starting serviceCIDR config controller"
I1217 00:07:18.009379 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1217 00:07:18.019518 1 config.go:106] "Starting endpoint slice config controller"
I1217 00:07:18.019544 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1217 00:07:18.024649 1 config.go:309] "Starting node config controller"
I1217 00:07:18.024677 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1217 00:07:18.024685 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1217 00:07:18.094734 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1217 00:07:18.109493 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1217 00:07:18.120488 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-scheduler [67450bc656f73dd9235124553c7b9a80e9f2e5403b09204044ae68765e6cdd43] <==
I1217 00:07:06.910488 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
E1217 00:07:06.921661 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1217 00:07:06.922970 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1217 00:07:06.923277 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1217 00:07:06.923621 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1217 00:07:06.923758 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1217 00:07:07.761322 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1217 00:07:07.780271 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1217 00:07:07.823200 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1217 00:07:07.829033 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1217 00:07:07.844191 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1217 00:07:07.844544 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1217 00:07:07.873929 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1217 00:07:07.896041 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1217 00:07:07.916557 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1217 00:07:07.947708 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1217 00:07:08.008636 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1217 00:07:08.059886 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1217 00:07:08.070234 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1217 00:07:08.143105 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1217 00:07:08.166079 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1217 00:07:08.178854 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1217 00:07:08.346275 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1217 00:07:08.397299 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
I1217 00:07:10.414867 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Dec 17 00:10:10 addons-262069 kubelet[1517]: E1217 00:10:10.332452 1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765930210331597822 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 17 00:10:10 addons-262069 kubelet[1517]: I1217 00:10:10.890843 1517 scope.go:117] "RemoveContainer" containerID="78245a5420d8fa0b275c5f14ee3e75b7270143c9c09cd5c50c30b40c4b12186b"
Dec 17 00:10:11 addons-262069 kubelet[1517]: I1217 00:10:11.012348 1517 scope.go:117] "RemoveContainer" containerID="83a58e432b5bf13cea6a5479cfea58185824d3be99f5929902f17f1b0998fdec"
Dec 17 00:10:11 addons-262069 kubelet[1517]: I1217 00:10:11.136159 1517 scope.go:117] "RemoveContainer" containerID="dca63ea6f9078b26c9bf53e0b061560f42747a5396684bf67075852ac056e440"
Dec 17 00:10:17 addons-262069 kubelet[1517]: I1217 00:10:17.004501 1517 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-h7ktx" secret="" err="secret \"gcp-auth\" not found"
Dec 17 00:10:20 addons-262069 kubelet[1517]: E1217 00:10:20.335485 1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765930220334813484 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 17 00:10:20 addons-262069 kubelet[1517]: E1217 00:10:20.335511 1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765930220334813484 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 17 00:10:28 addons-262069 kubelet[1517]: I1217 00:10:28.004663 1517 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-225dx" secret="" err="secret \"gcp-auth\" not found"
Dec 17 00:10:30 addons-262069 kubelet[1517]: E1217 00:10:30.338658 1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765930230338218971 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 17 00:10:30 addons-262069 kubelet[1517]: E1217 00:10:30.338701 1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765930230338218971 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 17 00:10:40 addons-262069 kubelet[1517]: E1217 00:10:40.341691 1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765930240341122443 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 17 00:10:40 addons-262069 kubelet[1517]: E1217 00:10:40.341731 1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765930240341122443 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 17 00:10:50 addons-262069 kubelet[1517]: E1217 00:10:50.344878 1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765930250344375260 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 17 00:10:50 addons-262069 kubelet[1517]: E1217 00:10:50.344908 1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765930250344375260 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 17 00:11:00 addons-262069 kubelet[1517]: E1217 00:11:00.348287 1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765930260347749150 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 17 00:11:00 addons-262069 kubelet[1517]: E1217 00:11:00.348321 1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765930260347749150 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 17 00:11:10 addons-262069 kubelet[1517]: E1217 00:11:10.352401 1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765930270351470475 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 17 00:11:10 addons-262069 kubelet[1517]: E1217 00:11:10.352449 1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765930270351470475 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 17 00:11:20 addons-262069 kubelet[1517]: E1217 00:11:20.355142 1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765930280354451370 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 17 00:11:20 addons-262069 kubelet[1517]: E1217 00:11:20.355179 1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765930280354451370 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 17 00:11:26 addons-262069 kubelet[1517]: I1217 00:11:26.005127 1517 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Dec 17 00:11:30 addons-262069 kubelet[1517]: E1217 00:11:30.359194 1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765930290358712964 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 17 00:11:30 addons-262069 kubelet[1517]: E1217 00:11:30.359237 1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765930290358712964 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 17 00:11:35 addons-262069 kubelet[1517]: I1217 00:11:35.004805 1517 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-225dx" secret="" err="secret \"gcp-auth\" not found"
Dec 17 00:11:37 addons-262069 kubelet[1517]: I1217 00:11:37.951334 1517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnmq5\" (UniqueName: \"kubernetes.io/projected/5d5f0ee3-96d7-4fd9-a8f1-c32bda978dc4-kube-api-access-tnmq5\") pod \"hello-world-app-5d498dc89-98t54\" (UID: \"5d5f0ee3-96d7-4fd9-a8f1-c32bda978dc4\") " pod="default/hello-world-app-5d498dc89-98t54"
==> storage-provisioner [dd1fdba2b689a084415531f7474442db44470fbe88cccf6cc431a5d63e3e0f4e] <==
W1217 00:11:15.812685 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 00:11:17.816616 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 00:11:17.824859 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 00:11:19.829438 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 00:11:19.835224 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 00:11:21.839136 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 00:11:21.848226 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 00:11:23.851787 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 00:11:23.858319 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 00:11:25.862346 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 00:11:25.870452 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 00:11:27.875288 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 00:11:27.881737 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 00:11:29.886662 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 00:11:29.895153 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 00:11:31.898973 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 00:11:31.905643 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 00:11:33.909343 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 00:11:33.917192 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 00:11:35.920322 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 00:11:35.927383 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 00:11:37.938397 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 00:11:37.956145 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 00:11:39.960620 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 00:11:39.970229 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:263: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-262069 -n addons-262069
helpers_test.go:270: (dbg) Run: kubectl --context addons-262069 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-nx5df ingress-nginx-admission-patch-d56md
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run: kubectl --context addons-262069 describe pod ingress-nginx-admission-create-nx5df ingress-nginx-admission-patch-d56md
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-262069 describe pod ingress-nginx-admission-create-nx5df ingress-nginx-admission-patch-d56md: exit status 1 (63.898644ms)
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-nx5df" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-d56md" not found
** /stderr **
helpers_test.go:288: kubectl --context addons-262069 describe pod ingress-nginx-admission-create-nx5df ingress-nginx-admission-patch-d56md: exit status 1
addons_test.go:1055: (dbg) Run: out/minikube-linux-amd64 -p addons-262069 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-262069 addons disable ingress-dns --alsologtostderr -v=1: (1.346333159s)
addons_test.go:1055: (dbg) Run: out/minikube-linux-amd64 -p addons-262069 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-262069 addons disable ingress --alsologtostderr -v=1: (7.775954082s)
--- FAIL: TestAddons/parallel/Ingress (159.80s)