=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run: kubectl --context addons-198878 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run: kubectl --context addons-198878 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run: kubectl --context addons-198878 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [cb70cb39-5ff1-4d2b-b014-86048256ca26] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [cb70cb39-5ff1-4d2b-b014-86048256ca26] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003302496s
I1126 19:38:09.144494 11003 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run: out/minikube-linux-amd64 -p addons-198878 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-198878 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.829194068s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run: kubectl --context addons-198878 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run: out/minikube-linux-amd64 -p addons-198878 ip
addons_test.go:299: (dbg) Run: nslookup hello-john.test 192.168.39.123
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-198878 -n addons-198878
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p addons-198878 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-198878 logs -n 25: (1.377353881s)
helpers_test.go:260: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-499024 │ download-only-499024 │ jenkins │ v1.37.0 │ 26 Nov 25 19:35 UTC │ 26 Nov 25 19:35 UTC │
│ start │ --download-only -p binary-mirror-630783 --alsologtostderr --binary-mirror http://127.0.0.1:33899 --driver=kvm2 --container-runtime=crio │ binary-mirror-630783 │ jenkins │ v1.37.0 │ 26 Nov 25 19:35 UTC │ │
│ delete │ -p binary-mirror-630783 │ binary-mirror-630783 │ jenkins │ v1.37.0 │ 26 Nov 25 19:35 UTC │ 26 Nov 25 19:35 UTC │
│ addons │ enable dashboard -p addons-198878 │ addons-198878 │ jenkins │ v1.37.0 │ 26 Nov 25 19:35 UTC │ │
│ addons │ disable dashboard -p addons-198878 │ addons-198878 │ jenkins │ v1.37.0 │ 26 Nov 25 19:35 UTC │ │
│ start │ -p addons-198878 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-198878 │ jenkins │ v1.37.0 │ 26 Nov 25 19:35 UTC │ 26 Nov 25 19:37 UTC │
│ addons │ addons-198878 addons disable volcano --alsologtostderr -v=1 │ addons-198878 │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:37 UTC │
│ addons │ addons-198878 addons disable gcp-auth --alsologtostderr -v=1 │ addons-198878 │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:37 UTC │
│ addons │ enable headlamp -p addons-198878 --alsologtostderr -v=1 │ addons-198878 │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:37 UTC │
│ addons │ addons-198878 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-198878 │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:37 UTC │
│ ssh │ addons-198878 ssh cat /opt/local-path-provisioner/pvc-a22e263c-d92b-4e58-83ac-82f62be484b9_default_test-pvc/file1 │ addons-198878 │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:37 UTC │
│ addons │ addons-198878 addons disable yakd --alsologtostderr -v=1 │ addons-198878 │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:37 UTC │
│ addons │ addons-198878 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-198878 │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:37 UTC │
│ addons │ addons-198878 addons disable headlamp --alsologtostderr -v=1 │ addons-198878 │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:38 UTC │
│ addons │ addons-198878 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-198878 │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:37 UTC │
│ ip │ addons-198878 ip │ addons-198878 │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:37 UTC │
│ addons │ addons-198878 addons disable registry --alsologtostderr -v=1 │ addons-198878 │ jenkins │ v1.37.0 │ 26 Nov 25 19:37 UTC │ 26 Nov 25 19:37 UTC │
│ addons │ addons-198878 addons disable metrics-server --alsologtostderr -v=1 │ addons-198878 │ jenkins │ v1.37.0 │ 26 Nov 25 19:38 UTC │ 26 Nov 25 19:38 UTC │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-198878 │ addons-198878 │ jenkins │ v1.37.0 │ 26 Nov 25 19:38 UTC │ 26 Nov 25 19:38 UTC │
│ addons │ addons-198878 addons disable registry-creds --alsologtostderr -v=1 │ addons-198878 │ jenkins │ v1.37.0 │ 26 Nov 25 19:38 UTC │ 26 Nov 25 19:38 UTC │
│ addons │ addons-198878 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-198878 │ jenkins │ v1.37.0 │ 26 Nov 25 19:38 UTC │ 26 Nov 25 19:38 UTC │
│ ssh │ addons-198878 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-198878 │ jenkins │ v1.37.0 │ 26 Nov 25 19:38 UTC │ │
│ addons │ addons-198878 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-198878 │ jenkins │ v1.37.0 │ 26 Nov 25 19:38 UTC │ 26 Nov 25 19:38 UTC │
│ addons │ addons-198878 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-198878 │ jenkins │ v1.37.0 │ 26 Nov 25 19:38 UTC │ 26 Nov 25 19:38 UTC │
│ ip │ addons-198878 ip │ addons-198878 │ jenkins │ v1.37.0 │ 26 Nov 25 19:40 UTC │ 26 Nov 25 19:40 UTC │
└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/11/26 19:35:08
Running on machine: ubuntu-20-agent-13
Binary: Built with gc go1.25.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1126 19:35:08.349724 11611 out.go:360] Setting OutFile to fd 1 ...
I1126 19:35:08.349923 11611 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:35:08.349931 11611 out.go:374] Setting ErrFile to fd 2...
I1126 19:35:08.349936 11611 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:35:08.350142 11611 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-7091/.minikube/bin
I1126 19:35:08.350593 11611 out.go:368] Setting JSON to false
I1126 19:35:08.351364 11611 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1058,"bootTime":1764184650,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1126 19:35:08.351411 11611 start.go:143] virtualization: kvm guest
I1126 19:35:08.353165 11611 out.go:179] * [addons-198878] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1126 19:35:08.354407 11611 out.go:179] - MINIKUBE_LOCATION=21974
I1126 19:35:08.354426 11611 notify.go:221] Checking for updates...
I1126 19:35:08.356651 11611 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1126 19:35:08.357904 11611 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21974-7091/kubeconfig
I1126 19:35:08.359170 11611 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-7091/.minikube
I1126 19:35:08.360267 11611 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1126 19:35:08.361509 11611 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1126 19:35:08.362885 11611 driver.go:422] Setting default libvirt URI to qemu:///system
I1126 19:35:08.394162 11611 out.go:179] * Using the kvm2 driver based on user configuration
I1126 19:35:08.395419 11611 start.go:309] selected driver: kvm2
I1126 19:35:08.395443 11611 start.go:927] validating driver "kvm2" against <nil>
I1126 19:35:08.395455 11611 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1126 19:35:08.396129 11611 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1126 19:35:08.396818 11611 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1126 19:35:08.396847 11611 cni.go:84] Creating CNI manager for ""
I1126 19:35:08.396895 11611 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1126 19:35:08.396908 11611 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1126 19:35:08.396947 11611 start.go:353] cluster config:
{Name:addons-198878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-198878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1126 19:35:08.397092 11611 iso.go:125] acquiring lock: {Name:mkfe3dbb7c1a56d5a5080a4e71d079899ad19ff3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1126 19:35:08.398596 11611 out.go:179] * Starting "addons-198878" primary control-plane node in "addons-198878" cluster
I1126 19:35:08.399754 11611 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1126 19:35:08.399777 11611 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-7091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
I1126 19:35:08.399783 11611 cache.go:65] Caching tarball of preloaded images
I1126 19:35:08.399845 11611 preload.go:238] Found /home/jenkins/minikube-integration/21974-7091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1126 19:35:08.399855 11611 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
I1126 19:35:08.400171 11611 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/config.json ...
I1126 19:35:08.400194 11611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/config.json: {Name:mke50fba2276487ff37a4cbe33afee7969a252fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1126 19:35:08.400346 11611 start.go:360] acquireMachinesLock for addons-198878: {Name:mk682108a3404f6d853d2e6b676abccdb6a57902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1126 19:35:08.400415 11611 start.go:364] duration metric: took 52.23µs to acquireMachinesLock for "addons-198878"
I1126 19:35:08.400439   11611 start.go:93] Provisioning new machine with config: &{Name:addons-198878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-198878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1126 19:35:08.400485 11611 start.go:125] createHost starting for "" (driver="kvm2")
I1126 19:35:08.402079 11611 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1126 19:35:08.402259 11611 start.go:159] libmachine.API.Create for "addons-198878" (driver="kvm2")
I1126 19:35:08.402294 11611 client.go:173] LocalClient.Create starting
I1126 19:35:08.402394 11611 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca.pem
I1126 19:35:08.503600 11611 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/cert.pem
I1126 19:35:08.575826 11611 main.go:143] libmachine: creating domain...
I1126 19:35:08.575849 11611 main.go:143] libmachine: creating network...
I1126 19:35:08.577227 11611 main.go:143] libmachine: found existing default network
I1126 19:35:08.577413 11611 main.go:143] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1126 19:35:08.577942 11611 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c035e0}
I1126 19:35:08.578050 11611 main.go:143] libmachine: defining private network:
<network>
<name>mk-addons-198878</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1126 19:35:08.583875 11611 main.go:143] libmachine: creating private network mk-addons-198878 192.168.39.0/24...
I1126 19:35:08.653847 11611 main.go:143] libmachine: private network mk-addons-198878 192.168.39.0/24 created
I1126 19:35:08.654176 11611 main.go:143] libmachine: <network>
<name>mk-addons-198878</name>
<uuid>814dc8f9-7f03-4085-9b4b-191d5f733f4b</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:96:2f:0c'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1126 19:35:08.654211 11611 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878 ...
I1126 19:35:08.654229 11611 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21974-7091/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
I1126 19:35:08.654238 11611 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21974-7091/.minikube
I1126 19:35:08.654294 11611 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21974-7091/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21974-7091/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso...
I1126 19:35:08.910353 11611 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa...
I1126 19:35:09.043638 11611 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/addons-198878.rawdisk...
I1126 19:35:09.043677 11611 main.go:143] libmachine: Writing magic tar header
I1126 19:35:09.043696 11611 main.go:143] libmachine: Writing SSH key tar header
I1126 19:35:09.043772 11611 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878 ...
I1126 19:35:09.043826 11611 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878
I1126 19:35:09.043856 11611 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878 (perms=drwx------)
I1126 19:35:09.043872 11611 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21974-7091/.minikube/machines
I1126 19:35:09.043881 11611 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21974-7091/.minikube/machines (perms=drwxr-xr-x)
I1126 19:35:09.043891 11611 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21974-7091/.minikube
I1126 19:35:09.043902 11611 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21974-7091/.minikube (perms=drwxr-xr-x)
I1126 19:35:09.043910 11611 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21974-7091
I1126 19:35:09.043924 11611 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21974-7091 (perms=drwxrwxr-x)
I1126 19:35:09.043934 11611 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1126 19:35:09.043941 11611 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1126 19:35:09.043953 11611 main.go:143] libmachine: checking permissions on dir: /home/jenkins
I1126 19:35:09.043960 11611 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1126 19:35:09.043971 11611 main.go:143] libmachine: checking permissions on dir: /home
I1126 19:35:09.043977 11611 main.go:143] libmachine: skipping /home - not owner
I1126 19:35:09.043981 11611 main.go:143] libmachine: defining domain...
I1126 19:35:09.045340 11611 main.go:143] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-198878</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/addons-198878.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-198878'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
I1126 19:35:09.053280 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:7c:fe:25 in network default
I1126 19:35:09.053994 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:09.054014 11611 main.go:143] libmachine: starting domain...
I1126 19:35:09.054018 11611 main.go:143] libmachine: ensuring networks are active...
I1126 19:35:09.054901 11611 main.go:143] libmachine: Ensuring network default is active
I1126 19:35:09.055352 11611 main.go:143] libmachine: Ensuring network mk-addons-198878 is active
I1126 19:35:09.055972 11611 main.go:143] libmachine: getting domain XML...
I1126 19:35:09.056938 11611 main.go:143] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-198878</name>
<uuid>3a31c91d-5706-460a-9959-5cc9b1ab6144</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/addons-198878.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:39:0c:6e'/>
<source network='mk-addons-198878'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:7c:fe:25'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1126 19:35:10.341247 11611 main.go:143] libmachine: waiting for domain to start...
I1126 19:35:10.342782 11611 main.go:143] libmachine: domain is now running
I1126 19:35:10.342801 11611 main.go:143] libmachine: waiting for IP...
I1126 19:35:10.343514 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:10.344033 11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
I1126 19:35:10.344044 11611 main.go:143] libmachine: trying to list again with source=arp
I1126 19:35:10.344324 11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
I1126 19:35:10.344361 11611 retry.go:31] will retry after 266.829865ms: waiting for domain to come up
I1126 19:35:10.612957 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:10.613557 11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
I1126 19:35:10.613571 11611 main.go:143] libmachine: trying to list again with source=arp
I1126 19:35:10.613866 11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
I1126 19:35:10.613920 11611 retry.go:31] will retry after 336.441283ms: waiting for domain to come up
I1126 19:35:10.951753 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:10.952376 11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
I1126 19:35:10.952398 11611 main.go:143] libmachine: trying to list again with source=arp
I1126 19:35:10.952691 11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
I1126 19:35:10.952721 11611 retry.go:31] will retry after 322.116478ms: waiting for domain to come up
I1126 19:35:11.276471 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:11.277110 11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
I1126 19:35:11.277130 11611 main.go:143] libmachine: trying to list again with source=arp
I1126 19:35:11.277459 11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
I1126 19:35:11.277501 11611 retry.go:31] will retry after 473.430506ms: waiting for domain to come up
I1126 19:35:11.752063 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:11.752553 11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
I1126 19:35:11.752570 11611 main.go:143] libmachine: trying to list again with source=arp
I1126 19:35:11.752856 11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
I1126 19:35:11.752890 11611 retry.go:31] will retry after 744.319165ms: waiting for domain to come up
I1126 19:35:12.498775 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:12.499302 11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
I1126 19:35:12.499318 11611 main.go:143] libmachine: trying to list again with source=arp
I1126 19:35:12.499634 11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
I1126 19:35:12.499662 11611 retry.go:31] will retry after 878.2162ms: waiting for domain to come up
I1126 19:35:13.379060 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:13.379618 11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
I1126 19:35:13.379638 11611 main.go:143] libmachine: trying to list again with source=arp
I1126 19:35:13.380041 11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
I1126 19:35:13.380104 11611 retry.go:31] will retry after 804.696615ms: waiting for domain to come up
I1126 19:35:14.185922 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:14.186436 11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
I1126 19:35:14.186454 11611 main.go:143] libmachine: trying to list again with source=arp
I1126 19:35:14.186793 11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
I1126 19:35:14.186829 11611 retry.go:31] will retry after 1.418235708s: waiting for domain to come up
I1126 19:35:15.606226 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:15.606752 11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
I1126 19:35:15.606784 11611 main.go:143] libmachine: trying to list again with source=arp
I1126 19:35:15.607186 11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
I1126 19:35:15.607221 11611 retry.go:31] will retry after 1.574841792s: waiting for domain to come up
I1126 19:35:17.184011 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:17.184520 11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
I1126 19:35:17.184533 11611 main.go:143] libmachine: trying to list again with source=arp
I1126 19:35:17.184852 11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
I1126 19:35:17.184881 11611 retry.go:31] will retry after 1.833984055s: waiting for domain to come up
I1126 19:35:19.020196 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:19.020728 11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
I1126 19:35:19.020744 11611 main.go:143] libmachine: trying to list again with source=arp
I1126 19:35:19.021112 11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
I1126 19:35:19.021148 11611 retry.go:31] will retry after 2.745043916s: waiting for domain to come up
I1126 19:35:21.770218 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:21.770828 11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
I1126 19:35:21.770848 11611 main.go:143] libmachine: trying to list again with source=arp
I1126 19:35:21.771186 11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
I1126 19:35:21.771225 11611 retry.go:31] will retry after 2.194652937s: waiting for domain to come up
I1126 19:35:23.967573 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:23.968013 11611 main.go:143] libmachine: no network interface addresses found for domain addons-198878 (source=lease)
I1126 19:35:23.968027 11611 main.go:143] libmachine: trying to list again with source=arp
I1126 19:35:23.968254 11611 main.go:143] libmachine: unable to find current IP address of domain addons-198878 in network mk-addons-198878 (interfaces detected: [])
I1126 19:35:23.968281 11611 retry.go:31] will retry after 3.679292601s: waiting for domain to come up
I1126 19:35:27.652134 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:27.652711 11611 main.go:143] libmachine: domain addons-198878 has current primary IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:27.652725 11611 main.go:143] libmachine: found domain IP: 192.168.39.123
I1126 19:35:27.652731 11611 main.go:143] libmachine: reserving static IP address...
I1126 19:35:27.653225 11611 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-198878", mac: "52:54:00:39:0c:6e", ip: "192.168.39.123"} in network mk-addons-198878
I1126 19:35:27.845227 11611 main.go:143] libmachine: reserved static IP address 192.168.39.123 for domain addons-198878
I1126 19:35:27.845253 11611 main.go:143] libmachine: waiting for SSH...
I1126 19:35:27.845271 11611 main.go:143] libmachine: Getting to WaitForSSH function...
I1126 19:35:27.847765 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:27.848065 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:minikube Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:27.848134 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:27.848318 11611 main.go:143] libmachine: Using SSH client type: native
I1126 19:35:27.848571 11611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.123 22 <nil> <nil>}
I1126 19:35:27.848583 11611 main.go:143] libmachine: About to run SSH command:
exit 0
I1126 19:35:27.968887 11611 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1126 19:35:27.969281 11611 main.go:143] libmachine: domain creation complete
I1126 19:35:27.970743 11611 machine.go:94] provisionDockerMachine start ...
I1126 19:35:27.973294 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:27.973696 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:27.973726 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:27.973903 11611 main.go:143] libmachine: Using SSH client type: native
I1126 19:35:27.974170 11611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.123 22 <nil> <nil>}
I1126 19:35:27.974182 11611 main.go:143] libmachine: About to run SSH command:
hostname
I1126 19:35:28.094548 11611 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
I1126 19:35:28.094578 11611 buildroot.go:166] provisioning hostname "addons-198878"
I1126 19:35:28.097497 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:28.097952 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:28.097972 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:28.098140 11611 main.go:143] libmachine: Using SSH client type: native
I1126 19:35:28.098327 11611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.123 22 <nil> <nil>}
I1126 19:35:28.098340 11611 main.go:143] libmachine: About to run SSH command:
sudo hostname addons-198878 && echo "addons-198878" | sudo tee /etc/hostname
I1126 19:35:28.237328 11611 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-198878
I1126 19:35:28.240263 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:28.240717 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:28.240741 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:28.240871 11611 main.go:143] libmachine: Using SSH client type: native
I1126 19:35:28.241057 11611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.123 22 <nil> <nil>}
I1126 19:35:28.241073 11611 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-198878' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-198878/g' /etc/hosts;
else
echo '127.0.1.1 addons-198878' | sudo tee -a /etc/hosts;
fi
fi
I1126 19:35:28.370219 11611 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1126 19:35:28.370246 11611 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21974-7091/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-7091/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-7091/.minikube}
I1126 19:35:28.370261 11611 buildroot.go:174] setting up certificates
I1126 19:35:28.370270 11611 provision.go:84] configureAuth start
I1126 19:35:28.373211 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:28.373577 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:28.373616 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:28.375695 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:28.376060 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:28.376105 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:28.376229 11611 provision.go:143] copyHostCerts
I1126 19:35:28.376301 11611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-7091/.minikube/ca.pem (1082 bytes)
I1126 19:35:28.424072 11611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-7091/.minikube/cert.pem (1123 bytes)
I1126 19:35:28.424262 11611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-7091/.minikube/key.pem (1675 bytes)
I1126 19:35:28.424343 11611 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-7091/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca-key.pem org=jenkins.addons-198878 san=[127.0.0.1 192.168.39.123 addons-198878 localhost minikube]
I1126 19:35:28.470104 11611 provision.go:177] copyRemoteCerts
I1126 19:35:28.470169 11611 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1126 19:35:28.472606 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:28.472945 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:28.472965 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:28.473106 11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
I1126 19:35:28.564818 11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1126 19:35:28.596338 11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1126 19:35:28.626433 11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1126 19:35:28.657607 11611 provision.go:87] duration metric: took 287.301255ms to configureAuth
I1126 19:35:28.657641 11611 buildroot.go:189] setting minikube options for container-runtime
I1126 19:35:28.657823 11611 config.go:182] Loaded profile config "addons-198878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:35:28.660409 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:28.660807 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:28.660830 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:28.660977 11611 main.go:143] libmachine: Using SSH client type: native
I1126 19:35:28.661175 11611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.123 22 <nil> <nil>}
I1126 19:35:28.661189 11611 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1126 19:35:29.094992 11611 main.go:143] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1126 19:35:29.095014 11611 machine.go:97] duration metric: took 1.124253596s to provisionDockerMachine
I1126 19:35:29.095026 11611 client.go:176] duration metric: took 20.692722921s to LocalClient.Create
I1126 19:35:29.095036 11611 start.go:167] duration metric: took 20.69277747s to libmachine.API.Create "addons-198878"
I1126 19:35:29.095042 11611 start.go:293] postStartSetup for "addons-198878" (driver="kvm2")
I1126 19:35:29.095050 11611 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1126 19:35:29.095143 11611 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1126 19:35:29.098441 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:29.098859 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:29.098883 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:29.099043 11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
I1126 19:35:29.197291 11611 ssh_runner.go:195] Run: cat /etc/os-release
I1126 19:35:29.203460 11611 info.go:137] Remote host: Buildroot 2025.02
I1126 19:35:29.203491 11611 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-7091/.minikube/addons for local assets ...
I1126 19:35:29.203573 11611 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-7091/.minikube/files for local assets ...
I1126 19:35:29.203609 11611 start.go:296] duration metric: took 108.560809ms for postStartSetup
I1126 19:35:29.263806 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:29.264316 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:29.264349 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:29.264567 11611 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/config.json ...
I1126 19:35:29.264746 11611 start.go:128] duration metric: took 20.864251471s to createHost
I1126 19:35:29.266967 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:29.267341 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:29.267364 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:29.267500 11611 main.go:143] libmachine: Using SSH client type: native
I1126 19:35:29.267698 11611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.123 22 <nil> <nil>}
I1126 19:35:29.267708 11611 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1126 19:35:29.385893 11611 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764185729.349930091
I1126 19:35:29.385914 11611 fix.go:216] guest clock: 1764185729.349930091
I1126 19:35:29.385924 11611 fix.go:229] Guest: 2025-11-26 19:35:29.349930091 +0000 UTC Remote: 2025-11-26 19:35:29.264757105 +0000 UTC m=+20.963207688 (delta=85.172986ms)
I1126 19:35:29.385942 11611 fix.go:200] guest clock delta is within tolerance: 85.172986ms
I1126 19:35:29.385956 11611 start.go:83] releasing machines lock for "addons-198878", held for 20.985528157s
I1126 19:35:29.388880 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:29.389353 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:29.389381 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:29.390073 11611 ssh_runner.go:195] Run: cat /version.json
I1126 19:35:29.390134 11611 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1126 19:35:29.392899 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:29.393147 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:29.393321 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:29.393354 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:29.393511 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:29.393521 11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
I1126 19:35:29.393537 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:29.393760 11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
I1126 19:35:29.501861 11611 ssh_runner.go:195] Run: systemctl --version
I1126 19:35:29.508847 11611 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1126 19:35:30.035730 11611 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1126 19:35:30.043466 11611 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1126 19:35:30.043540 11611 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1126 19:35:30.067861 11611 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1126 19:35:30.067890 11611 start.go:496] detecting cgroup driver to use...
I1126 19:35:30.067982 11611 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1126 19:35:30.088718 11611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1126 19:35:30.107579 11611 docker.go:218] disabling cri-docker service (if available) ...
I1126 19:35:30.107634 11611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1126 19:35:30.125710 11611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1126 19:35:30.142753 11611 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1126 19:35:30.290410 11611 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1126 19:35:30.504760 11611 docker.go:234] disabling docker service ...
I1126 19:35:30.504841 11611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1126 19:35:30.522212 11611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1126 19:35:30.538584 11611 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1126 19:35:30.701383 11611 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1126 19:35:30.852017 11611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1126 19:35:30.869602 11611 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1126 19:35:30.894746 11611 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1126 19:35:30.894821 11611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1126 19:35:30.908043 11611 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1126 19:35:30.908126 11611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1126 19:35:30.921031 11611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1126 19:35:30.933588 11611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1126 19:35:30.945992 11611 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1126 19:35:30.959338 11611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1126 19:35:30.971942 11611 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1126 19:35:30.994077 11611 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I1126 19:35:31.006970 11611 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1126 19:35:31.018306 11611 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1126 19:35:31.018385 11611 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1126 19:35:31.040637 11611 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1126 19:35:31.052793 11611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1126 19:35:31.197786 11611 ssh_runner.go:195] Run: sudo systemctl restart crio
I1126 19:35:31.320545 11611 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1126 19:35:31.320645 11611 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1126 19:35:31.326386 11611 start.go:564] Will wait 60s for crictl version
I1126 19:35:31.326469 11611 ssh_runner.go:195] Run: which crictl
I1126 19:35:31.331262 11611 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1126 19:35:31.368975 11611 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1126 19:35:31.369117 11611 ssh_runner.go:195] Run: crio --version
I1126 19:35:31.400593 11611 ssh_runner.go:195] Run: crio --version
I1126 19:35:31.432930 11611 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
I1126 19:35:31.437132 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:31.437582 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:31.437610 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:31.437808 11611 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1126 19:35:31.442987 11611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1126 19:35:31.459387 11611 kubeadm.go:884] updating cluster {Name:addons-198878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-198878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1126 19:35:31.459522 11611 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1126 19:35:31.459571 11611 ssh_runner.go:195] Run: sudo crictl images --output json
I1126 19:35:31.489382 11611 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
I1126 19:35:31.489447 11611 ssh_runner.go:195] Run: which lz4
I1126 19:35:31.494017 11611 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1126 19:35:31.499052 11611 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1126 19:35:31.499107 11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
I1126 19:35:33.028314 11611 crio.go:462] duration metric: took 1.534339111s to copy over tarball
I1126 19:35:33.028391 11611 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1126 19:35:34.704752 11611 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.676328192s)
I1126 19:35:34.704784 11611 crio.go:469] duration metric: took 1.676441228s to extract the tarball
I1126 19:35:34.704791 11611 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1126 19:35:34.747070 11611 ssh_runner.go:195] Run: sudo crictl images --output json
I1126 19:35:34.788943 11611 crio.go:514] all images are preloaded for cri-o runtime.
I1126 19:35:34.788978 11611 cache_images.go:86] Images are preloaded, skipping loading
I1126 19:35:34.788986 11611 kubeadm.go:935] updating node { 192.168.39.123 8443 v1.34.1 crio true true} ...
I1126 19:35:34.789068 11611 kubeadm.go:947] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-198878 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.123
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:addons-198878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1126 19:35:34.789175 11611 ssh_runner.go:195] Run: crio config
I1126 19:35:34.839584 11611 cni.go:84] Creating CNI manager for ""
I1126 19:35:34.839611 11611 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1126 19:35:34.839626 11611 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1126 19:35:34.839648 11611 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.123 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-198878 NodeName:addons-198878 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1126 19:35:34.839801 11611 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.123
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/crio/crio.sock
name: "addons-198878"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.39.123"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.123"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1126 19:35:34.839883 11611 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1126 19:35:34.853716 11611 binaries.go:51] Found k8s binaries, skipping transfer
I1126 19:35:34.853779 11611 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1126 19:35:34.866800 11611 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I1126 19:35:34.889682 11611 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1126 19:35:34.913069 11611 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
I1126 19:35:34.934736 11611 ssh_runner.go:195] Run: grep 192.168.39.123 control-plane.minikube.internal$ /etc/hosts
I1126 19:35:34.939316 11611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.123 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1126 19:35:34.954543 11611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1126 19:35:35.095406 11611 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1126 19:35:35.115829 11611 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878 for IP: 192.168.39.123
I1126 19:35:35.115856 11611 certs.go:195] generating shared ca certs ...
I1126 19:35:35.115874 11611 certs.go:227] acquiring lock for ca certs: {Name:mkec6f6093be68a4f0c7d5c64487ef4e93539f33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1126 19:35:35.116055 11611 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-7091/.minikube/ca.key
I1126 19:35:35.204411 11611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-7091/.minikube/ca.crt ...
I1126 19:35:35.204436 11611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/.minikube/ca.crt: {Name:mk5f1dcbeee7ab35dcd334ff3481a2f84c9aae3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1126 19:35:35.204608 11611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-7091/.minikube/ca.key ...
I1126 19:35:35.204620 11611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/.minikube/ca.key: {Name:mk6e0da3cd29b80eaa0b1f079dd9ca7c333201a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1126 19:35:35.204696 11611 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-7091/.minikube/proxy-client-ca.key
I1126 19:35:35.233957 11611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-7091/.minikube/proxy-client-ca.crt ...
I1126 19:35:35.233978 11611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/.minikube/proxy-client-ca.crt: {Name:mk6714651c1858f3eb22cb38368f74c902776653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1126 19:35:35.234126 11611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-7091/.minikube/proxy-client-ca.key ...
I1126 19:35:35.234138 11611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/.minikube/proxy-client-ca.key: {Name:mkea1a7fc500916b8dad6ebcedb9a4fa5d67c756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1126 19:35:35.234206 11611 certs.go:257] generating profile certs ...
I1126 19:35:35.234262 11611 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.key
I1126 19:35:35.234276 11611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt with IP's: []
I1126 19:35:35.380413 11611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt ...
I1126 19:35:35.380439 11611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.crt: {Name:mkb52c346045c9a0090ac970d54ac6fa85cdde36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1126 19:35:35.380608 11611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.key ...
I1126 19:35:35.380620 11611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/client.key: {Name:mk784ea1579a9da0c782da8a4e28ad4db5f4266c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1126 19:35:35.380688 11611 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/apiserver.key.44518eaa
I1126 19:35:35.380706 11611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/apiserver.crt.44518eaa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.123]
I1126 19:35:35.586774 11611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/apiserver.crt.44518eaa ...
I1126 19:35:35.586802 11611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/apiserver.crt.44518eaa: {Name:mk162bf0c4de5afeaf80a5b426d47e902280785f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1126 19:35:35.586970 11611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/apiserver.key.44518eaa ...
I1126 19:35:35.586983 11611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/apiserver.key.44518eaa: {Name:mk9ed0078f49c125458146cd027e59bb8d8c13ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1126 19:35:35.587058 11611 certs.go:382] copying /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/apiserver.crt.44518eaa -> /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/apiserver.crt
I1126 19:35:35.587144 11611 certs.go:386] copying /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/apiserver.key.44518eaa -> /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/apiserver.key
I1126 19:35:35.587192 11611 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/proxy-client.key
I1126 19:35:35.587209 11611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/proxy-client.crt with IP's: []
I1126 19:35:35.860895 11611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/proxy-client.crt ...
I1126 19:35:35.860925 11611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/proxy-client.crt: {Name:mk60dcfb55bfefc30302229b7eb301ddc6fb74c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1126 19:35:35.861093 11611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/proxy-client.key ...
I1126 19:35:35.861105 11611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/proxy-client.key: {Name:mk2ccb9d1359cd7942c01a64df7132791ff28560 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1126 19:35:35.861271 11611 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca-key.pem (1675 bytes)
I1126 19:35:35.861306 11611 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/ca.pem (1082 bytes)
I1126 19:35:35.861331 11611 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/cert.pem (1123 bytes)
I1126 19:35:35.861354 11611 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-7091/.minikube/certs/key.pem (1675 bytes)
I1126 19:35:35.861857 11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1126 19:35:35.902819 11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1126 19:35:35.935751 11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1126 19:35:35.970203 11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1126 19:35:36.001465 11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1126 19:35:36.033913 11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1126 19:35:36.064782 11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1126 19:35:36.095354 11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/profiles/addons-198878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1126 19:35:36.125193 11611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-7091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1126 19:35:36.155831 11611 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1126 19:35:36.180797 11611 ssh_runner.go:195] Run: openssl version
I1126 19:35:36.188104 11611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1126 19:35:36.203070 11611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1126 19:35:36.208723 11611 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:35 /usr/share/ca-certificates/minikubeCA.pem
I1126 19:35:36.208774 11611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1126 19:35:36.216421 11611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1126 19:35:36.230848 11611 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1126 19:35:36.235943 11611 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1126 19:35:36.236007 11611 kubeadm.go:401] StartCluster: {Name:addons-198878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-198878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1126 19:35:36.236063 11611 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1126 19:35:36.236142 11611 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1126 19:35:36.272535 11611 cri.go:89] found id: ""
I1126 19:35:36.272611 11611 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1126 19:35:36.285627 11611 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1126 19:35:36.298974 11611 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1126 19:35:36.313456 11611 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1126 19:35:36.313475 11611 kubeadm.go:158] found existing configuration files:
I1126 19:35:36.313517 11611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1126 19:35:36.325356 11611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1126 19:35:36.325409 11611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1126 19:35:36.337710 11611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1126 19:35:36.349411 11611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1126 19:35:36.349474 11611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1126 19:35:36.361980 11611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1126 19:35:36.373752 11611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1126 19:35:36.373823 11611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1126 19:35:36.386171 11611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1126 19:35:36.397344 11611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1126 19:35:36.397410 11611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1126 19:35:36.409466 11611 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1126 19:35:36.579153 11611 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1126 19:35:48.895219 11611 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
I1126 19:35:48.895300 11611 kubeadm.go:319] [preflight] Running pre-flight checks
I1126 19:35:48.895409 11611 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1126 19:35:48.895526 11611 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1126 19:35:48.895613 11611 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1126 19:35:48.895668 11611 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1126 19:35:48.897270 11611 out.go:252] - Generating certificates and keys ...
I1126 19:35:48.897343 11611 kubeadm.go:319] [certs] Using existing ca certificate authority
I1126 19:35:48.897408 11611 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1126 19:35:48.897502 11611 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1126 19:35:48.897588 11611 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1126 19:35:48.897686 11611 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1126 19:35:48.897762 11611 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1126 19:35:48.897848 11611 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1126 19:35:48.898006 11611 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-198878 localhost] and IPs [192.168.39.123 127.0.0.1 ::1]
I1126 19:35:48.898104 11611 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1126 19:35:48.898266 11611 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-198878 localhost] and IPs [192.168.39.123 127.0.0.1 ::1]
I1126 19:35:48.898365 11611 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1126 19:35:48.898467 11611 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1126 19:35:48.898535 11611 kubeadm.go:319] [certs] Generating "sa" key and public key
I1126 19:35:48.898585 11611 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1126 19:35:48.898648 11611 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1126 19:35:48.898723 11611 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1126 19:35:48.898791 11611 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1126 19:35:48.898853 11611 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1126 19:35:48.898901 11611 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1126 19:35:48.898990 11611 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1126 19:35:48.899048 11611 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1126 19:35:48.900406 11611 out.go:252] - Booting up control plane ...
I1126 19:35:48.900498 11611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1126 19:35:48.900571 11611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1126 19:35:48.900636 11611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1126 19:35:48.900728 11611 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1126 19:35:48.900816 11611 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1126 19:35:48.900948 11611 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1126 19:35:48.901064 11611 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1126 19:35:48.901186 11611 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1126 19:35:48.901348 11611 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1126 19:35:48.901458 11611 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1126 19:35:48.901515 11611 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002174283s
I1126 19:35:48.901588 11611 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1126 19:35:48.901683 11611 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.123:8443/livez
I1126 19:35:48.901770 11611 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1126 19:35:48.901834 11611 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1126 19:35:48.901898 11611 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.447789772s
I1126 19:35:48.901956 11611 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.488422193s
I1126 19:35:48.902016 11611 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501997583s
I1126 19:35:48.902137 11611 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1126 19:35:48.902273 11611 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1126 19:35:48.902393 11611 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1126 19:35:48.902597 11611 kubeadm.go:319] [mark-control-plane] Marking the node addons-198878 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1126 19:35:48.902678 11611 kubeadm.go:319] [bootstrap-token] Using token: xo527n.pd0o97bdcnwf3821
I1126 19:35:48.904158 11611 out.go:252] - Configuring RBAC rules ...
I1126 19:35:48.904274 11611 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1126 19:35:48.904378 11611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1126 19:35:48.904523 11611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1126 19:35:48.904698 11611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1126 19:35:48.904802 11611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1126 19:35:48.904873 11611 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1126 19:35:48.904981 11611 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1126 19:35:48.905036 11611 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1126 19:35:48.905073 11611 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1126 19:35:48.905092 11611 kubeadm.go:319]
I1126 19:35:48.905142 11611 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1126 19:35:48.905148 11611 kubeadm.go:319]
I1126 19:35:48.905228 11611 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1126 19:35:48.905238 11611 kubeadm.go:319]
I1126 19:35:48.905270 11611 kubeadm.go:319] mkdir -p $HOME/.kube
I1126 19:35:48.905333 11611 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1126 19:35:48.905375 11611 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1126 19:35:48.905381 11611 kubeadm.go:319]
I1126 19:35:48.905423 11611 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1126 19:35:48.905430 11611 kubeadm.go:319]
I1126 19:35:48.905465 11611 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1126 19:35:48.905475 11611 kubeadm.go:319]
I1126 19:35:48.905515 11611 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1126 19:35:48.905575 11611 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1126 19:35:48.905655 11611 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1126 19:35:48.905673 11611 kubeadm.go:319]
I1126 19:35:48.905751 11611 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1126 19:35:48.905820 11611 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1126 19:35:48.905826 11611 kubeadm.go:319]
I1126 19:35:48.905895 11611 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xo527n.pd0o97bdcnwf3821 \
I1126 19:35:48.906004 11611 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:c9a146404250e477d139e5ac0d4339741eaa7ea23ba8a3e74d2181ed46faf684 \
I1126 19:35:48.906024 11611 kubeadm.go:319] --control-plane
I1126 19:35:48.906031 11611 kubeadm.go:319]
I1126 19:35:48.906124 11611 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1126 19:35:48.906131 11611 kubeadm.go:319]
I1126 19:35:48.906223 11611 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xo527n.pd0o97bdcnwf3821 \
I1126 19:35:48.906367 11611 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:c9a146404250e477d139e5ac0d4339741eaa7ea23ba8a3e74d2181ed46faf684
I1126 19:35:48.906386 11611 cni.go:84] Creating CNI manager for ""
I1126 19:35:48.906396 11611 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1126 19:35:48.908011 11611 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1126 19:35:48.909325 11611 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1126 19:35:48.925100 11611 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I1126 19:35:48.952333 11611 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1126 19:35:48.952401 11611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1126 19:35:48.952455 11611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-198878 minikube.k8s.io/updated_at=2025_11_26T19_35_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970 minikube.k8s.io/name=addons-198878 minikube.k8s.io/primary=true
I1126 19:35:49.126064 11611 ops.go:34] apiserver oom_adj: -16
I1126 19:35:49.126162 11611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1126 19:35:49.626439 11611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1126 19:35:50.126309 11611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1126 19:35:50.626908 11611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1126 19:35:51.126854 11611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1126 19:35:51.626393 11611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1126 19:35:52.127145 11611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1126 19:35:52.626651 11611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1126 19:35:53.126599 11611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1126 19:35:53.224995 11611 kubeadm.go:1114] duration metric: took 4.272643611s to wait for elevateKubeSystemPrivileges
I1126 19:35:53.225027 11611 kubeadm.go:403] duration metric: took 16.989023109s to StartCluster
I1126 19:35:53.225042 11611 settings.go:142] acquiring lock: {Name:mk37c98b12b8a7193cfde69315430fb7cd818f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1126 19:35:53.225194 11611 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21974-7091/kubeconfig
I1126 19:35:53.225637 11611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-7091/kubeconfig: {Name:mk17b8b187372462ddf3f30b5296315dcdc9fda2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1126 19:35:53.225851 11611 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1126 19:35:53.225892 11611 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1126 19:35:53.226013 11611 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1126 19:35:53.226121 11611 config.go:182] Loaded profile config "addons-198878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:35:53.226129 11611 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-198878"
I1126 19:35:53.226146 11611 addons.go:70] Setting gcp-auth=true in profile "addons-198878"
I1126 19:35:53.226164 11611 mustload.go:66] Loading cluster: addons-198878
I1126 19:35:53.226178 11611 addons.go:70] Setting inspektor-gadget=true in profile "addons-198878"
I1126 19:35:53.226163 11611 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-198878"
I1126 19:35:53.226199 11611 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-198878"
I1126 19:35:53.226202 11611 addons.go:70] Setting ingress-dns=true in profile "addons-198878"
I1126 19:35:53.226195 11611 addons.go:70] Setting ingress=true in profile "addons-198878"
I1126 19:35:53.226210 11611 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-198878"
I1126 19:35:53.226224 11611 addons.go:239] Setting addon ingress-dns=true in "addons-198878"
I1126 19:35:53.226293 11611 host.go:66] Checking if "addons-198878" exists ...
I1126 19:35:53.226126 11611 addons.go:70] Setting yakd=true in profile "addons-198878"
I1126 19:35:53.226201 11611 addons.go:70] Setting metrics-server=true in profile "addons-198878"
I1126 19:35:53.226327 11611 addons.go:239] Setting addon yakd=true in "addons-198878"
I1126 19:35:53.226335 11611 addons.go:239] Setting addon metrics-server=true in "addons-198878"
I1126 19:35:53.226345 11611 host.go:66] Checking if "addons-198878" exists ...
I1126 19:35:53.226347 11611 config.go:182] Loaded profile config "addons-198878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:35:53.226371 11611 host.go:66] Checking if "addons-198878" exists ...
I1126 19:35:53.226172 11611 addons.go:70] Setting registry=true in profile "addons-198878"
I1126 19:35:53.226512 11611 addons.go:239] Setting addon registry=true in "addons-198878"
I1126 19:35:53.226538 11611 host.go:66] Checking if "addons-198878" exists ...
I1126 19:35:53.226137 11611 addons.go:70] Setting default-storageclass=true in profile "addons-198878"
I1126 19:35:53.226678 11611 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-198878"
I1126 19:35:53.226193 11611 addons.go:239] Setting addon inspektor-gadget=true in "addons-198878"
I1126 19:35:53.226850 11611 host.go:66] Checking if "addons-198878" exists ...
I1126 19:35:53.226211 11611 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-198878"
I1126 19:35:53.227498 11611 host.go:66] Checking if "addons-198878" exists ...
I1126 19:35:53.226222 11611 addons.go:70] Setting cloud-spanner=true in profile "addons-198878"
I1126 19:35:53.227736 11611 addons.go:239] Setting addon cloud-spanner=true in "addons-198878"
I1126 19:35:53.227763 11611 host.go:66] Checking if "addons-198878" exists ...
I1126 19:35:53.226220 11611 addons.go:239] Setting addon ingress=true in "addons-198878"
I1126 19:35:53.227838 11611 host.go:66] Checking if "addons-198878" exists ...
I1126 19:35:53.226230 11611 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-198878"
I1126 19:35:53.227899 11611 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-198878"
I1126 19:35:53.226230 11611 host.go:66] Checking if "addons-198878" exists ...
I1126 19:35:53.226233 11611 addons.go:70] Setting registry-creds=true in profile "addons-198878"
I1126 19:35:53.228530 11611 addons.go:239] Setting addon registry-creds=true in "addons-198878"
I1126 19:35:53.228557 11611 host.go:66] Checking if "addons-198878" exists ...
I1126 19:35:53.226236 11611 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-198878"
I1126 19:35:53.228650 11611 host.go:66] Checking if "addons-198878" exists ...
I1126 19:35:53.226241 11611 addons.go:70] Setting volcano=true in profile "addons-198878"
I1126 19:35:53.228862 11611 addons.go:239] Setting addon volcano=true in "addons-198878"
I1126 19:35:53.228888 11611 host.go:66] Checking if "addons-198878" exists ...
I1126 19:35:53.228951 11611 out.go:179] * Verifying Kubernetes components...
I1126 19:35:53.226244 11611 addons.go:70] Setting volumesnapshots=true in profile "addons-198878"
I1126 19:35:53.226239 11611 addons.go:70] Setting storage-provisioner=true in profile "addons-198878"
I1126 19:35:53.229364 11611 addons.go:239] Setting addon volumesnapshots=true in "addons-198878"
I1126 19:35:53.229486 11611 host.go:66] Checking if "addons-198878" exists ...
I1126 19:35:53.229373 11611 addons.go:239] Setting addon storage-provisioner=true in "addons-198878"
I1126 19:35:53.229570 11611 host.go:66] Checking if "addons-198878" exists ...
I1126 19:35:53.230406 11611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1126 19:35:53.233030 11611 host.go:66] Checking if "addons-198878" exists ...
I1126 19:35:53.233958 11611 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.5
I1126 19:35:53.235423 11611 addons.go:239] Setting addon default-storageclass=true in "addons-198878"
I1126 19:35:53.235457 11611 host.go:66] Checking if "addons-198878" exists ...
I1126 19:35:53.236147 11611 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1126 19:35:53.236154 11611 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1126 19:35:53.236474 11611 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-198878"
I1126 19:35:53.236511 11611 host.go:66] Checking if "addons-198878" exists ...
I1126 19:35:53.237006 11611 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1126 19:35:53.237030 11611 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1126 19:35:53.237021 11611 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
I1126 19:35:53.237103 11611 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1126 19:35:53.237832 11611 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
W1126 19:35:53.237628 11611 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1126 19:35:53.238200 11611 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1126 19:35:53.238206 11611 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
I1126 19:35:53.238225 11611 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1126 19:35:53.238244 11611 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1126 19:35:53.238255 11611 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1126 19:35:53.238271 11611 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1126 19:35:53.238285 11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1126 19:35:53.239011 11611 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
I1126 19:35:53.239022 11611 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1126 19:35:53.239459 11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1126 19:35:53.239148 11611 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1126 19:35:53.239562 11611 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1126 19:35:53.239964 11611 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1126 19:35:53.239992 11611 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1126 19:35:53.239994 11611 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1126 19:35:53.240438 11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1126 19:35:53.240000 11611 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1126 19:35:53.240474 11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1126 19:35:53.240015 11611 out.go:179] - Using image docker.io/registry:3.0.0
I1126 19:35:53.240031 11611 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
I1126 19:35:53.240058 11611 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1126 19:35:53.241379 11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1126 19:35:53.241740 11611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1126 19:35:53.241747 11611 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1126 19:35:53.241750 11611 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1126 19:35:53.241755 11611 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1126 19:35:53.241763 11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1126 19:35:53.242603 11611 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1126 19:35:53.242603 11611 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1126 19:35:53.242618 11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1126 19:35:53.242621 11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1126 19:35:53.243369 11611 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
I1126 19:35:53.243371 11611 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1126 19:35:53.244951 11611 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1126 19:35:53.245842 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.246324 11611 out.go:179] - Using image docker.io/busybox:stable
I1126 19:35:53.246330 11611 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
I1126 19:35:53.246789 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.247332 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.247581 11611 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1126 19:35:53.247622 11611 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1126 19:35:53.247634 11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1126 19:35:53.247646 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:53.247673 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.247845 11611 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1126 19:35:53.247863 11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1126 19:35:53.248531 11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
I1126 19:35:53.248585 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:53.248623 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.249034 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:53.249064 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.249545 11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
I1126 19:35:53.250075 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.250098 11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
I1126 19:35:53.250359 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.250468 11611 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1126 19:35:53.251628 11611 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1126 19:35:53.251664 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:53.251701 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.251703 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:53.251725 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.251913 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.252339 11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
I1126 19:35:53.252394 11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
I1126 19:35:53.252694 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.253386 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:53.253418 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.253522 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.253983 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.254108 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:53.254136 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.254225 11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
I1126 19:35:53.254316 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.254581 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:53.254614 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.254797 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.254858 11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
I1126 19:35:53.254902 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.254975 11611 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1126 19:35:53.255043 11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
I1126 19:35:53.255302 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:53.255338 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.255598 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:53.255632 11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
I1126 19:35:53.255659 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:53.255673 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:53.255687 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.255700 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.255753 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.255996 11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
I1126 19:35:53.256022 11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
I1126 19:35:53.256364 11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
I1126 19:35:53.257345 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.257650 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.257745 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:53.257775 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.257817 11611 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1126 19:35:53.257984 11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
I1126 19:35:53.258200 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:53.258236 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.258416 11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
I1126 19:35:53.259190 11611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1126 19:35:53.259205 11611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1126 19:35:53.261674 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.262076 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:35:53.262125 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:35:53.262300 11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
W1126 19:35:53.451606 11611 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:39334->192.168.39.123:22: read: connection reset by peer
I1126 19:35:53.451636 11611 retry.go:31] will retry after 367.345981ms: ssh: handshake failed: read tcp 192.168.39.1:39334->192.168.39.123:22: read: connection reset by peer
I1126 19:35:53.813000 11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1126 19:35:53.828815 11611 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1126 19:35:53.828837 11611 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1126 19:35:53.893111 11611 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1126 19:35:53.893170 11611 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1126 19:35:53.912638 11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1126 19:35:53.947758 11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1126 19:35:54.049726 11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1126 19:35:54.115997 11611 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1126 19:35:54.116024 11611 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1126 19:35:54.139396 11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1126 19:35:54.146998 11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
I1126 19:35:54.153926 11611 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1126 19:35:54.153950 11611 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1126 19:35:54.175760 11611 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1126 19:35:54.175782 11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1126 19:35:54.271854 11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1126 19:35:54.326596 11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1126 19:35:54.339279 11611 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1126 19:35:54.339304 11611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1126 19:35:54.341098 11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1126 19:35:54.487131 11611 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1126 19:35:54.487159 11611 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1126 19:35:54.731819 11611 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1126 19:35:54.731842 11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1126 19:35:54.782607 11611 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1126 19:35:54.782631 11611 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1126 19:35:54.819313 11611 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1126 19:35:54.819341 11611 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1126 19:35:55.037559 11611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1126 19:35:55.037589 11611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1126 19:35:55.087437 11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1126 19:35:55.143648 11611 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1126 19:35:55.143681 11611 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1126 19:35:55.346749 11611 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1126 19:35:55.346776 11611 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1126 19:35:55.359003 11611 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1126 19:35:55.359037 11611 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1126 19:35:55.594541 11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1126 19:35:55.677904 11611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1126 19:35:55.677931 11611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1126 19:35:55.817142 11611 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1126 19:35:55.817167 11611 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1126 19:35:55.859789 11611 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1126 19:35:55.859817 11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1126 19:35:55.901000 11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1126 19:35:56.167899 11611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1126 19:35:56.167924 11611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1126 19:35:56.193013 11611 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1126 19:35:56.193038 11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1126 19:35:56.259821 11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1126 19:35:56.511509 11611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1126 19:35:56.511541 11611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1126 19:35:56.822386 11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1126 19:35:56.944249 11611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1126 19:35:56.944277 11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1126 19:35:57.253687 11611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1126 19:35:57.253711 11611 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1126 19:35:57.641955 11611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1126 19:35:57.641977 11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1126 19:35:57.761873 11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.948835531s)
I1126 19:35:57.761971 11611 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.86874927s)
I1126 19:35:57.762002 11611 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.868863062s)
I1126 19:35:57.762005 11611 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1126 19:35:57.762644 11611 node_ready.go:35] waiting up to 6m0s for node "addons-198878" to be "Ready" ...
I1126 19:35:57.796472 11611 node_ready.go:49] node "addons-198878" is "Ready"
I1126 19:35:57.796509 11611 node_ready.go:38] duration metric: took 33.828906ms for node "addons-198878" to be "Ready" ...
I1126 19:35:57.796530 11611 api_server.go:52] waiting for apiserver process to appear ...
I1126 19:35:57.796587 11611 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1126 19:35:58.043896 11611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1126 19:35:58.043920 11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1126 19:35:58.265933 11611 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-198878" context rescaled to 1 replicas
I1126 19:35:58.407158 11611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1126 19:35:58.407203 11611 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1126 19:35:58.860254 11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1126 19:36:00.704182 11611 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1126 19:36:00.706928 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:36:00.707334 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:36:00.707360 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:36:00.707518 11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
I1126 19:36:01.033343 11611 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1126 19:36:01.326741 11611 addons.go:239] Setting addon gcp-auth=true in "addons-198878"
I1126 19:36:01.326797 11611 host.go:66] Checking if "addons-198878" exists ...
I1126 19:36:01.328759 11611 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1126 19:36:01.331455 11611 main.go:143] libmachine: domain addons-198878 has defined MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:36:01.331859 11611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0c:6e", ip: ""} in network mk-addons-198878: {Iface:virbr1 ExpiryTime:2025-11-26 20:35:24 +0000 UTC Type:0 Mac:52:54:00:39:0c:6e Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-198878 Clientid:01:52:54:00:39:0c:6e}
I1126 19:36:01.331880 11611 main.go:143] libmachine: domain addons-198878 has defined IP address 192.168.39.123 and MAC address 52:54:00:39:0c:6e in network mk-addons-198878
I1126 19:36:01.332043 11611 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21974-7091/.minikube/machines/addons-198878/id_rsa Username:docker}
I1126 19:36:02.976883 11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.064198557s)
I1126 19:36:02.976920 11611 addons.go:495] Verifying addon ingress=true in "addons-198878"
I1126 19:36:02.977034 11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.927284356s)
I1126 19:36:02.977111 11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.837659245s)
I1126 19:36:02.976987 11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.029179445s)
I1126 19:36:02.977212 11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (8.830179099s)
I1126 19:36:02.977227 11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.705350345s)
I1126 19:36:02.977287 11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.650668035s)
I1126 19:36:02.977315 11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.636195335s)
I1126 19:36:02.977348 11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.889889495s)
I1126 19:36:02.977412 11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.382836318s)
I1126 19:36:02.977439 11611 addons.go:495] Verifying addon registry=true in "addons-198878"
I1126 19:36:02.977474 11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.07645013s)
I1126 19:36:02.977541 11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.717681249s)
I1126 19:36:02.977493 11611 addons.go:495] Verifying addon metrics-server=true in "addons-198878"
I1126 19:36:02.977654 11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.155237401s)
W1126 19:36:02.978165 11611 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1126 19:36:02.978194 11611 retry.go:31] will retry after 257.967647ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1126 19:36:02.977686 11611 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.181082608s)
I1126 19:36:02.978246 11611 api_server.go:72] duration metric: took 9.75228674s to wait for apiserver process to appear ...
I1126 19:36:02.978259 11611 api_server.go:88] waiting for apiserver healthz status ...
I1126 19:36:02.978280 11611 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8443/healthz ...
I1126 19:36:02.979374 11611 out.go:179] * Verifying ingress addon...
I1126 19:36:02.979383 11611 out.go:179] * Verifying registry addon...
I1126 19:36:02.980103 11611 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-198878 service yakd-dashboard -n yakd-dashboard
I1126 19:36:02.981500 11611 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1126 19:36:02.981746 11611 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1126 19:36:02.998500 11611 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1126 19:36:02.998518 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:03.001569 11611 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1126 19:36:03.001593 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:03.009166 11611 api_server.go:279] https://192.168.39.123:8443/healthz returned 200:
ok
I1126 19:36:03.027346 11611 api_server.go:141] control plane version: v1.34.1
I1126 19:36:03.027376 11611 api_server.go:131] duration metric: took 49.110394ms to wait for apiserver health ...
I1126 19:36:03.027384 11611 system_pods.go:43] waiting for kube-system pods to appear ...
W1126 19:36:03.070420 11611 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
I1126 19:36:03.100737 11611 system_pods.go:59] 17 kube-system pods found
I1126 19:36:03.100783 11611 system_pods.go:61] "amd-gpu-device-plugin-zt7pv" [ffa55995-0947-4f78-957d-397eb61020a5] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1126 19:36:03.100794 11611 system_pods.go:61] "coredns-66bc5c9577-6rsq5" [8ea66335-1b9c-4fc6-8209-2b1db648b79f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1126 19:36:03.100804 11611 system_pods.go:61] "coredns-66bc5c9577-wrds5" [0753c05f-adb3-4630-8ba7-2d36c8c860a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1126 19:36:03.100811 11611 system_pods.go:61] "etcd-addons-198878" [811da56a-2483-43f0-95de-c963c1e4b316] Running
I1126 19:36:03.100816 11611 system_pods.go:61] "kube-apiserver-addons-198878" [1068adea-c1ab-4663-b8ad-fd2c00001978] Running
I1126 19:36:03.100821 11611 system_pods.go:61] "kube-controller-manager-addons-198878" [2ee761c0-b054-468a-b51f-9a79467fb150] Running
I1126 19:36:03.100829 11611 system_pods.go:61] "kube-ingress-dns-minikube" [b0787d04-501a-496c-8f26-3ecc20b7f3f3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1126 19:36:03.100834 11611 system_pods.go:61] "kube-proxy-qcc2j" [6c819ab2-e6e9-4eab-a9fe-f9bcdb82f78b] Running
I1126 19:36:03.100840 11611 system_pods.go:61] "kube-scheduler-addons-198878" [89b438dc-2243-4e7b-86d7-c94c4cc39ccd] Running
I1126 19:36:03.100849 11611 system_pods.go:61] "metrics-server-85b7d694d7-8krt2" [437ee4fe-01d9-47d9-8864-e19c70cc2b3e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1126 19:36:03.100858 11611 system_pods.go:61] "nvidia-device-plugin-daemonset-rhjld" [67364572-4090-46f0-bd16-407a2f2eecf7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1126 19:36:03.100867 11611 system_pods.go:61] "registry-6b586f9694-frf72" [7122caf5-586e-4824-aa05-e6968244eddd] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1126 19:36:03.100875 11611 system_pods.go:61] "registry-creds-764b6fb674-gt5ft" [da7ea709-5fa9-42a2-b62e-a749fa515bdb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1126 19:36:03.100883 11611 system_pods.go:61] "registry-proxy-6ltms" [2e78d651-29c0-42f1-a079-f759abd8acb2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1126 19:36:03.100891 11611 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wp4z8" [676c6bdf-5a1c-4a1f-b401-7fe966339e87] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1126 19:36:03.100899 11611 system_pods.go:61] "snapshot-controller-7d9fbc56b8-zj5kh" [a0ca0044-d67c-4946-8baf-0362d9c8c372] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1126 19:36:03.100908 11611 system_pods.go:61] "storage-provisioner" [4aa788b3-9723-47cc-bc23-a6c4b4b2c70d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1126 19:36:03.100916 11611 system_pods.go:74] duration metric: took 73.525897ms to wait for pod list to return data ...
I1126 19:36:03.100929 11611 default_sa.go:34] waiting for default service account to be created ...
I1126 19:36:03.110064 11611 default_sa.go:45] found service account: "default"
I1126 19:36:03.110112 11611 default_sa.go:55] duration metric: took 9.175646ms for default service account to be created ...
I1126 19:36:03.110123 11611 system_pods.go:116] waiting for k8s-apps to be running ...
I1126 19:36:03.121823 11611 system_pods.go:86] 17 kube-system pods found
I1126 19:36:03.121867 11611 system_pods.go:89] "amd-gpu-device-plugin-zt7pv" [ffa55995-0947-4f78-957d-397eb61020a5] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1126 19:36:03.121879 11611 system_pods.go:89] "coredns-66bc5c9577-6rsq5" [8ea66335-1b9c-4fc6-8209-2b1db648b79f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1126 19:36:03.121891 11611 system_pods.go:89] "coredns-66bc5c9577-wrds5" [0753c05f-adb3-4630-8ba7-2d36c8c860a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1126 19:36:03.121897 11611 system_pods.go:89] "etcd-addons-198878" [811da56a-2483-43f0-95de-c963c1e4b316] Running
I1126 19:36:03.121904 11611 system_pods.go:89] "kube-apiserver-addons-198878" [1068adea-c1ab-4663-b8ad-fd2c00001978] Running
I1126 19:36:03.121910 11611 system_pods.go:89] "kube-controller-manager-addons-198878" [2ee761c0-b054-468a-b51f-9a79467fb150] Running
I1126 19:36:03.121918 11611 system_pods.go:89] "kube-ingress-dns-minikube" [b0787d04-501a-496c-8f26-3ecc20b7f3f3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1126 19:36:03.121926 11611 system_pods.go:89] "kube-proxy-qcc2j" [6c819ab2-e6e9-4eab-a9fe-f9bcdb82f78b] Running
I1126 19:36:03.121932 11611 system_pods.go:89] "kube-scheduler-addons-198878" [89b438dc-2243-4e7b-86d7-c94c4cc39ccd] Running
I1126 19:36:03.121940 11611 system_pods.go:89] "metrics-server-85b7d694d7-8krt2" [437ee4fe-01d9-47d9-8864-e19c70cc2b3e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1126 19:36:03.121953 11611 system_pods.go:89] "nvidia-device-plugin-daemonset-rhjld" [67364572-4090-46f0-bd16-407a2f2eecf7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1126 19:36:03.121962 11611 system_pods.go:89] "registry-6b586f9694-frf72" [7122caf5-586e-4824-aa05-e6968244eddd] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1126 19:36:03.121973 11611 system_pods.go:89] "registry-creds-764b6fb674-gt5ft" [da7ea709-5fa9-42a2-b62e-a749fa515bdb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1126 19:36:03.121980 11611 system_pods.go:89] "registry-proxy-6ltms" [2e78d651-29c0-42f1-a079-f759abd8acb2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1126 19:36:03.121989 11611 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wp4z8" [676c6bdf-5a1c-4a1f-b401-7fe966339e87] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1126 19:36:03.121999 11611 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zj5kh" [a0ca0044-d67c-4946-8baf-0362d9c8c372] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1126 19:36:03.122010 11611 system_pods.go:89] "storage-provisioner" [4aa788b3-9723-47cc-bc23-a6c4b4b2c70d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1126 19:36:03.122021 11611 system_pods.go:126] duration metric: took 11.891567ms to wait for k8s-apps to be running ...
I1126 19:36:03.122036 11611 system_svc.go:44] waiting for kubelet service to be running ....
I1126 19:36:03.122114 11611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1126 19:36:03.236505 11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1126 19:36:03.491222 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:03.493983 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:03.990732 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:03.997049 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:04.096013 11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.235700075s)
I1126 19:36:04.096044 11611 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-198878"
I1126 19:36:04.096045 11611 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.767266622s)
I1126 19:36:04.096141 11611 system_svc.go:56] duration metric: took 974.10069ms WaitForService to wait for kubelet
I1126 19:36:04.096167 11611 kubeadm.go:587] duration metric: took 10.870208638s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1126 19:36:04.096188 11611 node_conditions.go:102] verifying NodePressure condition ...
I1126 19:36:04.097316 11611 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
I1126 19:36:04.097411 11611 out.go:179] * Verifying csi-hostpath-driver addon...
I1126 19:36:04.098504 11611 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1126 19:36:04.099341 11611 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1126 19:36:04.099527 11611 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1126 19:36:04.099544 11611 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1126 19:36:04.145634 11611 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1126 19:36:04.145661 11611 node_conditions.go:123] node cpu capacity is 2
I1126 19:36:04.145673 11611 node_conditions.go:105] duration metric: took 49.479037ms to run NodePressure ...
I1126 19:36:04.145684 11611 start.go:242] waiting for startup goroutines ...
I1126 19:36:04.178149 11611 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1126 19:36:04.178175 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:04.192444 11611 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1126 19:36:04.192475 11611 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1126 19:36:04.298631 11611 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1126 19:36:04.298651 11611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1126 19:36:04.426979 11611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1126 19:36:04.491864 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:04.492017 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:04.608072 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:04.989458 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:04.993054 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:05.110357 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:05.239352 11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.002801877s)
I1126 19:36:05.496466 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:05.497651 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:05.638671 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:05.699035 11611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.272006592s)
I1126 19:36:05.700487 11611 addons.go:495] Verifying addon gcp-auth=true in "addons-198878"
I1126 19:36:05.702062 11611 out.go:179] * Verifying gcp-auth addon...
I1126 19:36:05.704026 11611 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1126 19:36:05.760626 11611 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1126 19:36:05.760651 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:05.994444 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:05.994690 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:06.106998 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:06.211840 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:06.499948 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:06.501279 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:06.620062 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:06.709415 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:06.990625 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:06.991421 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:07.107002 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:07.211382 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:07.486148 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:07.486458 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:07.605466 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:07.707711 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:07.987522 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:07.992346 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:08.106370 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:08.208794 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:08.486674 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:08.487151 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:08.608902 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:08.711370 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:08.986687 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:08.989490 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:09.105298 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:09.208491 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:09.486616 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:09.486835 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:09.605673 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:09.708106 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:09.985749 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:09.987391 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:10.105701 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:10.211150 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:10.488702 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:10.489002 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:10.603570 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:10.711877 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:10.986548 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:10.987317 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:11.103494 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:11.209118 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:11.485287 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:11.485510 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:11.605336 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:11.708777 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:11.987357 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:11.987770 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:12.104796 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:12.208477 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:12.487069 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:12.492496 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:12.606045 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:12.709286 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:12.987760 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:12.988937 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:13.105165 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:13.207689 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:13.487332 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:13.491095 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:13.718664 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:13.720994 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:13.987104 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:13.987140 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:14.104040 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:14.209020 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:14.487879 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:14.489777 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:14.605988 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:14.710793 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:14.986386 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:14.986995 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:15.114075 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:15.209337 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:15.486577 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:15.486650 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:15.603945 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:15.716202 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:15.989416 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:15.993517 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:16.105160 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:16.209295 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:16.489446 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:16.490216 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:16.606249 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:16.709977 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:16.989823 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:16.992439 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:17.103629 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:17.208279 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:17.487267 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:17.487401 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:17.603584 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:17.709320 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:17.991005 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:17.993415 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:18.105491 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:18.208691 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:18.486462 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:18.486468 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:18.603285 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:18.717879 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:19.255171 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:19.264748 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:19.264807 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:19.264958 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:19.485362 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:19.487219 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:19.605144 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:19.708638 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:19.986278 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:19.986527 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:20.106477 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:20.212070 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:20.486233 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:20.487248 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:20.606853 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:20.709268 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:20.988700 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:20.988792 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:21.104582 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:21.207442 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:21.488192 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:21.488327 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:21.605590 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:21.712171 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:21.989471 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:21.989907 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:22.105177 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:22.208563 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:22.494405 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:22.495848 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:22.608423 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:22.709066 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:22.989727 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:22.990050 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:23.109011 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:23.211002 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:23.486254 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:23.487368 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:23.606886 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:23.712661 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:23.988158 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:23.988623 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:24.107501 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:24.208703 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:24.489208 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:24.492980 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:24.607265 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:24.875787 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:25.022444 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:25.022536 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:25.103370 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:25.208331 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:25.485269 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:25.487836 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:25.603719 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:25.710451 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:25.988844 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:25.991143 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:26.105799 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:26.208159 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:26.487941 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:26.488197 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:26.610816 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:26.711438 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:26.988554 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:26.988678 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:27.104906 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:27.211687 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:27.490678 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:27.491397 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:27.605422 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:27.709918 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:27.986857 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:27.991254 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:28.104504 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:28.211676 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:28.486765 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:28.486908 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:28.603951 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:28.710810 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:29.039950 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:29.040032 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:29.104880 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:29.210332 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:29.486765 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:29.490387 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:29.605809 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:29.709672 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:30.234603 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:30.234694 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:30.234910 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:30.235218 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:30.487643 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:30.487909 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:30.602666 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:30.709504 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:30.986189 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:30.987847 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:31.106001 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:31.208478 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:31.486843 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:31.486859 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:31.603556 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:31.708238 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:31.988227 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:31.989189 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:32.104678 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:32.208363 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:32.486531 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:32.488065 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:32.608529 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:32.825732 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:33.004727 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:33.009413 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:33.432039 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:33.432854 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:33.488777 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:33.489648 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:33.603318 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:33.711296 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:33.986508 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:33.986537 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:34.105666 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:34.208395 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:34.486866 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:34.488238 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:34.603881 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:34.709073 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:34.986378 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:34.987365 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:35.106037 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:35.211738 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:35.493191 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:35.493517 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:35.605567 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:35.712771 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:35.987227 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:35.987860 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:36.103650 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:36.211521 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:36.486814 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:36.487072 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:36.603590 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:36.708474 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:36.986458 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1126 19:36:36.987153 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:37.109079 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:37.208801 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:37.488353 11611 kapi.go:107] duration metric: took 34.506606428s to wait for kubernetes.io/minikube-addons=registry ...
I1126 19:36:37.489542 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:37.606541 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:37.713836 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:37.985587 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:38.107437 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:38.212647 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:38.488990 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:38.604377 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:38.711757 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:38.989079 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:39.107014 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:39.208897 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:39.489856 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:39.604340 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:39.709729 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:39.989251 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:40.105384 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:40.210689 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:40.486868 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:40.605226 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:40.708011 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:40.987195 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:41.104281 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:41.208558 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:41.485989 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:41.635352 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:41.709968 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:41.990231 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:42.105439 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:42.212031 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:42.485649 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:42.608693 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:42.710655 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:42.985249 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:43.105161 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:43.211444 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:43.485775 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:43.618058 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:43.716692 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:43.987323 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:44.104582 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:44.209761 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:44.488009 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:44.605945 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:44.711855 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:44.985558 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:45.102863 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:45.208767 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:45.488476 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:45.603931 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:45.708797 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:45.988957 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:46.107402 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:46.210590 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:46.488873 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:46.603309 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:46.708069 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:47.037669 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:47.104273 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:47.209723 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:47.485534 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:47.616379 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:47.714613 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:47.985314 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:48.109393 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:48.208135 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:48.488020 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:48.604524 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:48.707527 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:48.985308 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:49.105713 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:49.210053 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:49.487312 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:49.603701 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:49.710408 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:49.989190 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:50.115043 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:50.212246 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:50.488332 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:50.604130 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:50.709533 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:50.989570 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:51.102993 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:51.212397 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:51.760175 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:51.760434 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:51.764040 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:51.998386 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:52.194692 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:52.212283 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:52.490622 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:52.605594 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:52.717977 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:52.989416 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:53.104503 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:53.208891 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:53.486553 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:53.603741 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:53.710661 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:53.985350 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:54.105351 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:54.212996 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:54.492732 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:54.603628 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:54.711509 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:54.988583 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:55.103408 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:55.209271 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:55.489909 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:55.605340 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:55.716521 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:55.993894 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:56.104480 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:56.208963 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:56.486056 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:56.607858 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:56.713768 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:56.985661 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:57.104840 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:57.210105 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:57.489224 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:57.607303 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:57.711293 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:57.988507 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:58.103586 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:58.207171 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:58.486568 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:58.603857 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:58.708860 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:58.989635 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:59.103598 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:59.207682 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:59.487022 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:36:59.605864 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1126 19:36:59.708266 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:36:59.987009 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:00.103719 11611 kapi.go:107] duration metric: took 56.004374347s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1126 19:37:00.207837 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:00.485558 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:00.708048 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:00.986648 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:01.208166 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:01.485905 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:01.708569 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:01.985281 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:02.208363 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:02.485510 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:02.707668 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:02.985372 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:03.207610 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:03.485945 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:03.707439 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:03.985728 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:04.208702 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:04.485102 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:04.707198 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:04.986469 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:05.208201 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:05.603565 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:05.709549 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:05.985796 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:06.208395 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:06.485037 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:06.709181 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:06.986721 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:07.209727 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:07.487684 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:07.709785 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:07.989479 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:08.208824 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:08.486140 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:08.709162 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:08.986162 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:09.209752 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:09.486267 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:09.710977 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:09.987974 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:10.212284 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:10.485920 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:10.711250 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:10.989859 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:11.209576 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:11.485279 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:11.709675 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:11.986221 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:12.212281 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:12.487229 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:12.708565 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:12.998518 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:13.209189 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:13.485099 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:13.711987 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:13.986703 11611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1126 19:37:14.208472 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:14.489290 11611 kapi.go:107] duration metric: took 1m11.5077899s to wait for app.kubernetes.io/name=ingress-nginx ...
I1126 19:37:14.708080 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:15.208068 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:15.713135 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:16.209616 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:16.707251 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:17.211697 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:17.707828 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:18.208209 11611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1126 19:37:18.709028 11611 kapi.go:107] duration metric: took 1m13.004999661s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1126 19:37:18.710751 11611 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-198878 cluster.
I1126 19:37:18.712060 11611 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1126 19:37:18.713287 11611 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I1126 19:37:18.714553 11611 out.go:179] * Enabled addons: cloud-spanner, storage-provisioner, registry-creds, nvidia-device-plugin, amd-gpu-device-plugin, inspektor-gadget, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I1126 19:37:18.715743 11611 addons.go:530] duration metric: took 1m25.489739712s for enable addons: enabled=[cloud-spanner storage-provisioner registry-creds nvidia-device-plugin amd-gpu-device-plugin inspektor-gadget ingress-dns metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I1126 19:37:18.715783 11611 start.go:247] waiting for cluster config update ...
I1126 19:37:18.715806 11611 start.go:256] writing updated cluster config ...
I1126 19:37:18.716055 11611 ssh_runner.go:195] Run: rm -f paused
I1126 19:37:18.723210 11611 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1126 19:37:18.726773 11611 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6rsq5" in "kube-system" namespace to be "Ready" or be gone ...
I1126 19:37:18.733896 11611 pod_ready.go:94] pod "coredns-66bc5c9577-6rsq5" is "Ready"
I1126 19:37:18.733920 11611 pod_ready.go:86] duration metric: took 7.129783ms for pod "coredns-66bc5c9577-6rsq5" in "kube-system" namespace to be "Ready" or be gone ...
I1126 19:37:18.736723 11611 pod_ready.go:83] waiting for pod "etcd-addons-198878" in "kube-system" namespace to be "Ready" or be gone ...
I1126 19:37:18.743667 11611 pod_ready.go:94] pod "etcd-addons-198878" is "Ready"
I1126 19:37:18.743685 11611 pod_ready.go:86] duration metric: took 6.947246ms for pod "etcd-addons-198878" in "kube-system" namespace to be "Ready" or be gone ...
I1126 19:37:18.746334 11611 pod_ready.go:83] waiting for pod "kube-apiserver-addons-198878" in "kube-system" namespace to be "Ready" or be gone ...
I1126 19:37:18.751960 11611 pod_ready.go:94] pod "kube-apiserver-addons-198878" is "Ready"
I1126 19:37:18.751976 11611 pod_ready.go:86] duration metric: took 5.627398ms for pod "kube-apiserver-addons-198878" in "kube-system" namespace to be "Ready" or be gone ...
I1126 19:37:18.754570 11611 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-198878" in "kube-system" namespace to be "Ready" or be gone ...
I1126 19:37:19.127739 11611 pod_ready.go:94] pod "kube-controller-manager-addons-198878" is "Ready"
I1126 19:37:19.127763 11611 pod_ready.go:86] duration metric: took 373.177086ms for pod "kube-controller-manager-addons-198878" in "kube-system" namespace to be "Ready" or be gone ...
I1126 19:37:19.326895 11611 pod_ready.go:83] waiting for pod "kube-proxy-qcc2j" in "kube-system" namespace to be "Ready" or be gone ...
I1126 19:37:19.728612 11611 pod_ready.go:94] pod "kube-proxy-qcc2j" is "Ready"
I1126 19:37:19.728636 11611 pod_ready.go:86] duration metric: took 401.717594ms for pod "kube-proxy-qcc2j" in "kube-system" namespace to be "Ready" or be gone ...
I1126 19:37:19.928532 11611 pod_ready.go:83] waiting for pod "kube-scheduler-addons-198878" in "kube-system" namespace to be "Ready" or be gone ...
I1126 19:37:20.328225 11611 pod_ready.go:94] pod "kube-scheduler-addons-198878" is "Ready"
I1126 19:37:20.328257 11611 pod_ready.go:86] duration metric: took 399.701466ms for pod "kube-scheduler-addons-198878" in "kube-system" namespace to be "Ready" or be gone ...
I1126 19:37:20.328273 11611 pod_ready.go:40] duration metric: took 1.60503412s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1126 19:37:20.373111 11611 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
I1126 19:37:20.374807 11611 out.go:179] * Done! kubectl is now configured to use "addons-198878" cluster and "default" namespace by default
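The gcp-auth note in the output above says a pod can opt out of credential mounting by carrying the `gcp-auth-skip-secret` label. As a minimal sketch (the pod name and image here are illustrative, not taken from this log), that label sits in the pod's `metadata.labels`:

```yaml
# Hypothetical pod manifest: the gcp-auth-skip-secret label tells the
# minikube gcp-auth addon's mutating webhook not to mount GCP
# credentials into this pod. Name and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: no-creds            # illustrative name
  labels:
    gcp-auth-skip-secret: "true"
spec:
  containers:
  - name: app
    image: nginx            # illustrative image
```

Per the same output, pods created before the addon was enabled keep their old spec until they are recreated or `minikube addons enable gcp-auth --refresh` is rerun.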
==> CRI-O <==
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.315099893Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764186026315073311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4e093ff-931c-44a5-8395-01a3e07c5f01 name=/runtime.v1.ImageService/ImageFsInfo
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.317775114Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bd4f3cbf-71d8-4a19-bafe-e872e7c167d6 name=/runtime.v1.RuntimeService/ListContainers
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.317926755Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bd4f3cbf-71d8-4a19-bafe-e872e7c167d6 name=/runtime.v1.RuntimeService/ListContainers
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.318354872Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d687b1cb2df562e6432704428175fe90e123a2ab6d6b328bf7647c520c27a014,PodSandboxId:117709cb2d9a99b6a122c70ad16d593f25c596424c2f42c0a849877584578945,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1764185882028877693,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cb70cb39-5ff1-4d2b-b014-86048256ca26,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dcc6e175d9f7e28a744e9f7320c1aead308bd08082d3013028cd9b54bd13471,PodSandboxId:a702030515d60f4a28f4e3dbea4be830f9053c8aa7b80c925cc1e3232c7ba49b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1764185843889580516,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b398ed93-d3e3-42f0-9ff8-eb0a88b0786a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8e34a5c12ee5bd5870e5fde9ddbd80fb4f59a62e257b63644bce1d3dadd28a,PodSandboxId:9f66c7c3ecc73fe5bdbf87ab63e873a1c11cfc56983a354e5833a25a3759eccd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1764185834250659196,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-dg8xd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 21f3aeeb-571c-46e7-a767-3ebdf23216ba,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8fe1ba22354a6d44e7434a4850160b53d2fa3b4b2ecd587f25a1eda6ed9eba5a,PodSandboxId:d9b1194cb11d531db60527c495a37a19111b0b178f42626fc560d9931e6eada4,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1764185825280374108,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cjbkr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 73f9be4a-f818-4127-8255-899e6c553774,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:557a2d941c8b530b3be0bf133e15ba9c2cf242beb634d3ec48c54ae49a910a0b,PodSandboxId:d1ef08ee18a309e28a2cdd922207017eda217d545de266fd218e7e14e5e19bb9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764185810222114056,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7hjrv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 37c4d2b9-d287-4912-812c-3a3720e73da3,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d295efad8604d4f06038ded7e4666d028c07e5a3dd06d7a300f3f5d9815bd1a,PodSandboxId:e130e2b91c88169054068a8c1954e9e72f7652c668f166882ad2e64d9b35d929,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1764185798022440827,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-p6gkd,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c82a99a0-a467-4df0-8c9c-5b91d82b7c2c,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f5cfada7e968356cd9f43fd22734f1215f0689883dc41088e7749e552cca56,PodSandboxId:157f9cb5e65f4aadd5f806c7710bee54cbcd39729d2abc02ec7fd597e2332159,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1764185793626661239,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0787d04-501a-496c-8f26-3ecc20b7f3f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac0a7ddcca4f6ddea2e8dbd02f945a65c656c5197043e0794e89ecca0f9ba35e,PodSandboxId:b111919f515bb107f30a72367481b2ad00231ccc00abc472e0275b5773f0bafc,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},
Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1764185766088243456,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-zt7pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa55995-0947-4f78-957d-397eb61020a5,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6560862725d64d696c066e82421c3884ddf391f1b99d75d5dd1cd0f58c1b7171,PodSandboxId:beb1aaf56a9902d3e869b77102b5fb5a84429746788ea9e70c7824773c07a8ba,Metadata:&ContainerMetadata
{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764185762423846869,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4aa788b3-9723-47cc-bc23-a6c4b4b2c70d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e80d4f44f2fa1846c12019372134bbc488b52b1bf782994f77475b273b5ba0c2,PodSandboxId:45857bd9cf26136267ebdd22104ff9a9ca6338f23dbb3f904a5f7c40737056c2,Metadata:&ContainerMetadata{Name:coredn
s,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764185755165528956,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6rsq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea66335-1b9c-4fc6-8209-2b1db648b79f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1122a8f344ffeb1984b48a8747125afcf335fc5a63852549889a34e589dbdd4,PodSandboxId:6b5744ddd945ed364619d478d9078f8b816d04184d4c1a2c8d782c4a209ad500,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1764185754384804082,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qcc2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c819ab2-e6e9-4eab-a9fe-f9bcdb82f78b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:039227d5c32668d7a544e8dab27d32bdcf7a9493668abe7212bccfe5a90a90ad,PodSandboxId:8458f2d683addadc809c074ee3e60968b8338397c423646019270e6ca248d596,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1764185742633940598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-198878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33f116ea56476240aa27cfbf6746e3fb,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"T
CP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5be95f29d66ce013ac18e78d2b0377cc9ae86b14f1ef875c74630026b4a5651f,PodSandboxId:8c92f4c64e71110c7c6279c6db27b847b287d84cd978533f96bdbdebf53a4e5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1764185742396560035,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-198878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3113d9f2abbedcec3e63d05ab89f093,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0bebab37664092b74b019d66b6138297725eedc3ae1f6f34da142e4e366a169,PodSandboxId:5f66c1af5779926c722ab379bc0e661342a3a7e5dafeeaf0f21fd98b328d8ccc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1764185742232176926,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-198878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a85665ce676
3c8ae58422727e16d0b19,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81f4d14092622fa6ce01768db00090dc0df11716cb9afc5a98f33684c32db95,PodSandboxId:bb05b454535e0836da60a5060942a7c38d0282c9389a977d97c31fd2f28b8836,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1764185742162776588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name:
kube-apiserver-addons-198878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b81724d889d88c6b1304a265b5f3c84,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bd4f3cbf-71d8-4a19-bafe-e872e7c167d6 name=/runtime.v1.RuntimeService/ListContainers
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.366712554Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ae47ff3-03b9-4986-aac8-3c78022b0646 name=/runtime.v1.RuntimeService/Version
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.366892429Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ae47ff3-03b9-4986-aac8-3c78022b0646 name=/runtime.v1.RuntimeService/Version
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.368460729Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fa25886a-4d61-43d4-9ee6-a0a15b451fe3 name=/runtime.v1.ImageService/ImageFsInfo
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.369747846Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764186026369724438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa25886a-4d61-43d4-9ee6-a0a15b451fe3 name=/runtime.v1.ImageService/ImageFsInfo
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.370627368Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98597494-f999-4f5e-9b2c-c49bb17ffdb3 name=/runtime.v1.RuntimeService/ListContainers
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.370728770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98597494-f999-4f5e-9b2c-c49bb17ffdb3 name=/runtime.v1.RuntimeService/ListContainers
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.371086209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d687b1cb2df562e6432704428175fe90e123a2ab6d6b328bf7647c520c27a014,PodSandboxId:117709cb2d9a99b6a122c70ad16d593f25c596424c2f42c0a849877584578945,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1764185882028877693,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cb70cb39-5ff1-4d2b-b014-86048256ca26,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dcc6e175d9f7e28a744e9f7320c1aead308bd08082d3013028cd9b54bd13471,PodSandboxId:a702030515d60f4a28f4e3dbea4be830f9053c8aa7b80c925cc1e3232c7ba49b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1764185843889580516,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b398ed93-d3e3-42f0-9ff8-eb0a88b0786a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8e34a5c12ee5bd5870e5fde9ddbd80fb4f59a62e257b63644bce1d3dadd28a,PodSandboxId:9f66c7c3ecc73fe5bdbf87ab63e873a1c11cfc56983a354e5833a25a3759eccd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1764185834250659196,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-dg8xd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 21f3aeeb-571c-46e7-a767-3ebdf23216ba,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8fe1ba22354a6d44e7434a4850160b53d2fa3b4b2ecd587f25a1eda6ed9eba5a,PodSandboxId:d9b1194cb11d531db60527c495a37a19111b0b178f42626fc560d9931e6eada4,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1764185825280374108,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cjbkr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 73f9be4a-f818-4127-8255-899e6c553774,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:557a2d941c8b530b3be0bf133e15ba9c2cf242beb634d3ec48c54ae49a910a0b,PodSandboxId:d1ef08ee18a309e28a2cdd922207017eda217d545de266fd218e7e14e5e19bb9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764185810222114056,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7hjrv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 37c4d2b9-d287-4912-812c-3a3720e73da3,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d295efad8604d4f06038ded7e4666d028c07e5a3dd06d7a300f3f5d9815bd1a,PodSandboxId:e130e2b91c88169054068a8c1954e9e72f7652c668f166882ad2e64d9b35d929,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1764185798022440827,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-p6gkd,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c82a99a0-a467-4df0-8c9c-5b91d82b7c2c,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f5cfada7e968356cd9f43fd22734f1215f0689883dc41088e7749e552cca56,PodSandboxId:157f9cb5e65f4aadd5f806c7710bee54cbcd39729d2abc02ec7fd597e2332159,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1764185793626661239,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0787d04-501a-496c-8f26-3ecc20b7f3f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac0a7ddcca4f6ddea2e8dbd02f945a65c656c5197043e0794e89ecca0f9ba35e,PodSandboxId:b111919f515bb107f30a72367481b2ad00231ccc00abc472e0275b5773f0bafc,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},
Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1764185766088243456,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-zt7pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa55995-0947-4f78-957d-397eb61020a5,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6560862725d64d696c066e82421c3884ddf391f1b99d75d5dd1cd0f58c1b7171,PodSandboxId:beb1aaf56a9902d3e869b77102b5fb5a84429746788ea9e70c7824773c07a8ba,Metadata:&ContainerMetadata
{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764185762423846869,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4aa788b3-9723-47cc-bc23-a6c4b4b2c70d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e80d4f44f2fa1846c12019372134bbc488b52b1bf782994f77475b273b5ba0c2,PodSandboxId:45857bd9cf26136267ebdd22104ff9a9ca6338f23dbb3f904a5f7c40737056c2,Metadata:&ContainerMetadata{Name:coredn
s,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764185755165528956,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6rsq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea66335-1b9c-4fc6-8209-2b1db648b79f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1122a8f344ffeb1984b48a8747125afcf335fc5a63852549889a34e589dbdd4,PodSandboxId:6b5744ddd945ed364619d478d9078f8b816d04184d4c1a2c8d782c4a209ad500,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1764185754384804082,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qcc2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c819ab2-e6e9-4eab-a9fe-f9bcdb82f78b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:039227d5c32668d7a544e8dab27d32bdcf7a9493668abe7212bccfe5a90a90ad,PodSandboxId:8458f2d683addadc809c074ee3e60968b8338397c423646019270e6ca248d596,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1764185742633940598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-198878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33f116ea56476240aa27cfbf6746e3fb,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"T
CP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5be95f29d66ce013ac18e78d2b0377cc9ae86b14f1ef875c74630026b4a5651f,PodSandboxId:8c92f4c64e71110c7c6279c6db27b847b287d84cd978533f96bdbdebf53a4e5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1764185742396560035,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-198878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3113d9f2abbedcec3e63d05ab89f093,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0bebab37664092b74b019d66b6138297725eedc3ae1f6f34da142e4e366a169,PodSandboxId:5f66c1af5779926c722ab379bc0e661342a3a7e5dafeeaf0f21fd98b328d8ccc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1764185742232176926,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-198878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a85665ce676
3c8ae58422727e16d0b19,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81f4d14092622fa6ce01768db00090dc0df11716cb9afc5a98f33684c32db95,PodSandboxId:bb05b454535e0836da60a5060942a7c38d0282c9389a977d97c31fd2f28b8836,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1764185742162776588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name:
kube-apiserver-addons-198878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b81724d889d88c6b1304a265b5f3c84,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=98597494-f999-4f5e-9b2c-c49bb17ffdb3 name=/runtime.v1.RuntimeService/ListContainers
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.414686686Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b8620abf-ba74-4da9-afd1-3738fc6173a3 name=/runtime.v1.RuntimeService/Version
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.414786996Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b8620abf-ba74-4da9-afd1-3738fc6173a3 name=/runtime.v1.RuntimeService/Version
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.416074819Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9fa8bb05-df6e-4b32-b514-ebd6fd1b7603 name=/runtime.v1.ImageService/ImageFsInfo
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.417525659Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764186026417498887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9fa8bb05-df6e-4b32-b514-ebd6fd1b7603 name=/runtime.v1.ImageService/ImageFsInfo
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.418772850Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a247095-6db0-439a-a430-131bb0d3f8a1 name=/runtime.v1.RuntimeService/ListContainers
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.418826093Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a247095-6db0-439a-a430-131bb0d3f8a1 name=/runtime.v1.RuntimeService/ListContainers
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.419289166Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d687b1cb2df562e6432704428175fe90e123a2ab6d6b328bf7647c520c27a014,PodSandboxId:117709cb2d9a99b6a122c70ad16d593f25c596424c2f42c0a849877584578945,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1764185882028877693,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cb70cb39-5ff1-4d2b-b014-86048256ca26,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dcc6e175d9f7e28a744e9f7320c1aead308bd08082d3013028cd9b54bd13471,PodSandboxId:a702030515d60f4a28f4e3dbea4be830f9053c8aa7b80c925cc1e3232c7ba49b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1764185843889580516,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b398ed93-d3e3-42f0-9ff8-eb0a88b0786a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d8e34a5c12ee5bd5870e5fde9ddbd80fb4f59a62e257b63644bce1d3dadd28a,PodSandboxId:9f66c7c3ecc73fe5bdbf87ab63e873a1c11cfc56983a354e5833a25a3759eccd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1764185834250659196,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-dg8xd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 21f3aeeb-571c-46e7-a767-3ebdf23216ba,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8fe1ba22354a6d44e7434a4850160b53d2fa3b4b2ecd587f25a1eda6ed9eba5a,PodSandboxId:d9b1194cb11d531db60527c495a37a19111b0b178f42626fc560d9931e6eada4,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1764185825280374108,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cjbkr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 73f9be4a-f818-4127-8255-899e6c553774,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:557a2d941c8b530b3be0bf133e15ba9c2cf242beb634d3ec48c54ae49a910a0b,PodSandboxId:d1ef08ee18a309e28a2cdd922207017eda217d545de266fd218e7e14e5e19bb9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764185810222114056,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7hjrv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 37c4d2b9-d287-4912-812c-3a3720e73da3,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d295efad8604d4f06038ded7e4666d028c07e5a3dd06d7a300f3f5d9815bd1a,PodSandboxId:e130e2b91c88169054068a8c1954e9e72f7652c668f166882ad2e64d9b35d929,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1764185798022440827,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-p6gkd,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c82a99a0-a467-4df0-8c9c-5b91d82b7c2c,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f5cfada7e968356cd9f43fd22734f1215f0689883dc41088e7749e552cca56,PodSandboxId:157f9cb5e65f4aadd5f806c7710bee54cbcd39729d2abc02ec7fd597e2332159,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1764185793626661239,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0787d04-501a-496c-8f26-3ecc20b7f3f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac0a7ddcca4f6ddea2e8dbd02f945a65c656c5197043e0794e89ecca0f9ba35e,PodSandboxId:b111919f515bb107f30a72367481b2ad00231ccc00abc472e0275b5773f0bafc,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},
Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1764185766088243456,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-zt7pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa55995-0947-4f78-957d-397eb61020a5,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6560862725d64d696c066e82421c3884ddf391f1b99d75d5dd1cd0f58c1b7171,PodSandboxId:beb1aaf56a9902d3e869b77102b5fb5a84429746788ea9e70c7824773c07a8ba,Metadata:&ContainerMetadata
{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764185762423846869,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4aa788b3-9723-47cc-bc23-a6c4b4b2c70d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e80d4f44f2fa1846c12019372134bbc488b52b1bf782994f77475b273b5ba0c2,PodSandboxId:45857bd9cf26136267ebdd22104ff9a9ca6338f23dbb3f904a5f7c40737056c2,Metadata:&ContainerMetadata{Name:coredn
s,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764185755165528956,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6rsq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea66335-1b9c-4fc6-8209-2b1db648b79f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1122a8f344ffeb1984b48a8747125afcf335fc5a63852549889a34e589dbdd4,PodSandboxId:6b5744ddd945ed364619d478d9078f8b816d04184d4c1a2c8d782c4a209ad500,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1764185754384804082,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qcc2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c819ab2-e6e9-4eab-a9fe-f9bcdb82f78b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:039227d5c32668d7a544e8dab27d32bdcf7a9493668abe7212bccfe5a90a90ad,PodSandboxId:8458f2d683addadc809c074ee3e60968b8338397c423646019270e6ca248d596,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1764185742633940598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-198878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33f116ea56476240aa27cfbf6746e3fb,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"T
CP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5be95f29d66ce013ac18e78d2b0377cc9ae86b14f1ef875c74630026b4a5651f,PodSandboxId:8c92f4c64e71110c7c6279c6db27b847b287d84cd978533f96bdbdebf53a4e5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1764185742396560035,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-198878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3113d9f2abbedcec3e63d05ab89f093,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0bebab37664092b74b019d66b6138297725eedc3ae1f6f34da142e4e366a169,PodSandboxId:5f66c1af5779926c722ab379bc0e661342a3a7e5dafeeaf0f21fd98b328d8ccc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1764185742232176926,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-198878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a85665ce676
3c8ae58422727e16d0b19,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81f4d14092622fa6ce01768db00090dc0df11716cb9afc5a98f33684c32db95,PodSandboxId:bb05b454535e0836da60a5060942a7c38d0282c9389a977d97c31fd2f28b8836,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1764185742162776588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name:
kube-apiserver-addons-198878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b81724d889d88c6b1304a265b5f3c84,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8a247095-6db0-439a-a430-131bb0d3f8a1 name=/runtime.v1.RuntimeService/ListContainers
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.431757076Z" level=debug msg="ImagePull (2): docker.io/kicbase/echo-server:1.0 (sha256:a055a10ed683d0944c17c642f7cf3259b524ceb32317ec887513b018e67aed1e): 2135952 bytes (100.00%)" file="server/image_pull.go:276" id=e6da90bb-fbde-4165-a7c7-fbd8ecfa7842 name=/runtime.v1.ImageService/PullImage
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.431999310Z" level=debug msg="No compression detected" file="compression/compression.go:133"
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.432232103Z" level=debug msg="Compression change for blob sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30 (\"application/vnd.docker.container.image.v1+json\") not supported" file="copy/compression.go:91"
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.432266579Z" level=debug msg="Using original blob without modification" file="copy/compression.go:226"
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.432748397Z" level=debug msg="ImagePull (0): docker.io/kicbase/echo-server:1.0 (sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30): 0 bytes (0.00%)" file="server/image_pull.go:276" id=e6da90bb-fbde-4165-a7c7-fbd8ecfa7842 name=/runtime.v1.ImageService/PullImage
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.446626696Z" level=debug msg="ImagePull (2): docker.io/kicbase/echo-server:1.0 (sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30): 1197 bytes (100.00%)" file="server/image_pull.go:276" id=e6da90bb-fbde-4165-a7c7-fbd8ecfa7842 name=/runtime.v1.ImageService/PullImage
Nov 26 19:40:26 addons-198878 crio[819]: time="2025-11-26 19:40:26.446911272Z" level=debug msg="setting image creation date to 2022-07-10 23:15:54.185884751 +0000 UTC" file="storage/storage_dest.go:775"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
d687b1cb2df56 docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 2 minutes ago Running nginx 0 117709cb2d9a9 nginx default
3dcc6e175d9f7 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 3 minutes ago Running busybox 0 a702030515d60 busybox default
4d8e34a5c12ee registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27 3 minutes ago Running controller 0 9f66c7c3ecc73 ingress-nginx-controller-6c8bf45fb-dg8xd ingress-nginx
8fe1ba22354a6 884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45 3 minutes ago Exited patch 2 d9b1194cb11d5 ingress-nginx-admission-patch-cjbkr ingress-nginx
557a2d941c8b5 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f 3 minutes ago Exited create 0 d1ef08ee18a30 ingress-nginx-admission-create-7hjrv ingress-nginx
3d295efad8604 docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef 3 minutes ago Running local-path-provisioner 0 e130e2b91c881 local-path-provisioner-648f6765c9-p6gkd local-path-storage
55f5cfada7e96 docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 3 minutes ago Running minikube-ingress-dns 0 157f9cb5e65f4 kube-ingress-dns-minikube kube-system
ac0a7ddcca4f6 docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 4 minutes ago Running amd-gpu-device-plugin 0 b111919f515bb amd-gpu-device-plugin-zt7pv kube-system
6560862725d64 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 beb1aaf56a990 storage-provisioner kube-system
e80d4f44f2fa1 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969 4 minutes ago Running coredns 0 45857bd9cf261 coredns-66bc5c9577-6rsq5 kube-system
d1122a8f344ff fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7 4 minutes ago Running kube-proxy 0 6b5744ddd945e kube-proxy-qcc2j kube-system
039227d5c3266 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813 4 minutes ago Running kube-scheduler 0 8458f2d683add kube-scheduler-addons-198878 kube-system
5be95f29d66ce c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f 4 minutes ago Running kube-controller-manager 0 8c92f4c64e711 kube-controller-manager-addons-198878 kube-system
c0bebab376640 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115 4 minutes ago Running etcd 0 5f66c1af57799 etcd-addons-198878 kube-system
a81f4d1409262 c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97 4 minutes ago Running kube-apiserver 0 bb05b454535e0 kube-apiserver-addons-198878 kube-system
==> coredns [e80d4f44f2fa1846c12019372134bbc488b52b1bf782994f77475b273b5ba0c2] <==
[INFO] 10.244.0.8:33561 - 30151 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000298692s
[INFO] 10.244.0.8:33561 - 64276 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000120548s
[INFO] 10.244.0.8:33561 - 37248 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000112972s
[INFO] 10.244.0.8:33561 - 69 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000113303s
[INFO] 10.244.0.8:33561 - 8021 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.0001298s
[INFO] 10.244.0.8:33561 - 61068 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000275043s
[INFO] 10.244.0.8:33561 - 36733 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000083602s
[INFO] 10.244.0.8:48716 - 54185 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00011778s
[INFO] 10.244.0.8:48716 - 54412 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000139693s
[INFO] 10.244.0.8:51224 - 20145 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085443s
[INFO] 10.244.0.8:51224 - 20419 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000059093s
[INFO] 10.244.0.8:53670 - 49812 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000116573s
[INFO] 10.244.0.8:53670 - 50095 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000093052s
[INFO] 10.244.0.8:44853 - 31586 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000108098s
[INFO] 10.244.0.8:44853 - 31773 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000066283s
[INFO] 10.244.0.23:52964 - 12280 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00043788s
[INFO] 10.244.0.23:49117 - 28544 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000242505s
[INFO] 10.244.0.23:49898 - 20618 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000144907s
[INFO] 10.244.0.23:56055 - 12240 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000260249s
[INFO] 10.244.0.23:42306 - 48219 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000164159s
[INFO] 10.244.0.23:36251 - 32931 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00083501s
[INFO] 10.244.0.23:60569 - 28083 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001313397s
[INFO] 10.244.0.23:41101 - 63791 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.00394348s
[INFO] 10.244.0.29:55555 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001709973s
[INFO] 10.244.0.29:33895 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000169502s
==> describe nodes <==
Name: addons-198878
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-198878
kubernetes.io/os=linux
minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
minikube.k8s.io/name=addons-198878
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_11_26T19_35_48_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-198878
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 26 Nov 2025 19:35:45 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-198878
AcquireTime: <unset>
RenewTime: Wed, 26 Nov 2025 19:40:24 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 26 Nov 2025 19:38:52 +0000 Wed, 26 Nov 2025 19:35:43 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 26 Nov 2025 19:38:52 +0000 Wed, 26 Nov 2025 19:35:43 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 26 Nov 2025 19:38:52 +0000 Wed, 26 Nov 2025 19:35:43 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 26 Nov 2025 19:38:52 +0000 Wed, 26 Nov 2025 19:35:49 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.123
Hostname: addons-198878
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
System Info:
Machine ID: 3a31c91d5706460a99595cc9b1ab6144
System UUID: 3a31c91d-5706-460a-9959-5cc9b1ab6144
Boot ID: bc5d73a4-1281-4d1e-819c-176197babf67
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.34.1
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (14 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m6s
default hello-world-app-5d498dc89-tkxwx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 1s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m27s
ingress-nginx ingress-nginx-controller-6c8bf45fb-dg8xd 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m24s
kube-system amd-gpu-device-plugin-zt7pv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m30s
kube-system coredns-66bc5c9577-6rsq5 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4m33s
kube-system etcd-addons-198878 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 4m38s
kube-system kube-apiserver-addons-198878 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m38s
kube-system kube-controller-manager-addons-198878 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m38s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m27s
kube-system kube-proxy-qcc2j 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m33s
kube-system kube-scheduler-addons-198878 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m40s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m27s
local-path-storage local-path-provisioner-648f6765c9-p6gkd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m26s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m31s kube-proxy
Normal NodeHasSufficientMemory 4m46s (x8 over 4m46s) kubelet Node addons-198878 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m46s (x8 over 4m46s) kubelet Node addons-198878 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m46s (x7 over 4m46s) kubelet Node addons-198878 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m46s kubelet Updated Node Allocatable limit across pods
Normal Starting 4m38s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 4m38s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 4m38s kubelet Node addons-198878 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m38s kubelet Node addons-198878 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m38s kubelet Node addons-198878 status is now: NodeHasSufficientPID
Normal NodeReady 4m37s kubelet Node addons-198878 status is now: NodeReady
Normal RegisteredNode 4m34s node-controller Node addons-198878 event: Registered Node addons-198878 in Controller
==> dmesg <==
[Nov26 19:36] kauditd_printk_skb: 344 callbacks suppressed
[ +5.630397] kauditd_printk_skb: 347 callbacks suppressed
[ +7.669119] kauditd_printk_skb: 32 callbacks suppressed
[ +5.772707] kauditd_printk_skb: 32 callbacks suppressed
[ +8.286009] kauditd_printk_skb: 17 callbacks suppressed
[ +9.482407] kauditd_printk_skb: 41 callbacks suppressed
[ +5.283182] kauditd_printk_skb: 131 callbacks suppressed
[ +0.904571] kauditd_printk_skb: 129 callbacks suppressed
[ +5.785327] kauditd_printk_skb: 77 callbacks suppressed
[Nov26 19:37] kauditd_printk_skb: 5 callbacks suppressed
[ +0.000147] kauditd_printk_skb: 65 callbacks suppressed
[ +5.189870] kauditd_printk_skb: 53 callbacks suppressed
[ +2.607761] kauditd_printk_skb: 47 callbacks suppressed
[ +10.470165] kauditd_printk_skb: 17 callbacks suppressed
[ +0.001321] kauditd_printk_skb: 22 callbacks suppressed
[ +1.425793] kauditd_printk_skb: 107 callbacks suppressed
[ +1.016995] kauditd_printk_skb: 108 callbacks suppressed
[ +0.850761] kauditd_printk_skb: 172 callbacks suppressed
[Nov26 19:38] kauditd_printk_skb: 133 callbacks suppressed
[ +1.395834] kauditd_printk_skb: 48 callbacks suppressed
[ +0.000045] kauditd_printk_skb: 10 callbacks suppressed
[ +11.973808] kauditd_printk_skb: 41 callbacks suppressed
[ +0.000091] kauditd_printk_skb: 10 callbacks suppressed
[ +7.453461] kauditd_printk_skb: 41 callbacks suppressed
[Nov26 19:40] kauditd_printk_skb: 127 callbacks suppressed
==> etcd [c0bebab37664092b74b019d66b6138297725eedc3ae1f6f34da142e4e366a169] <==
{"level":"info","ts":"2025-11-26T19:36:51.741174Z","caller":"traceutil/trace.go:172","msg":"trace[1007688485] range","detail":"{range_begin:/registry/events/ingress-nginx/ingress-nginx-admission-patch-cjbkr.187ba5a5c1cb530b; range_end:; response_count:1; response_revision:1082; }","duration":"323.327378ms","start":"2025-11-26T19:36:51.417785Z","end":"2025-11-26T19:36:51.741112Z","steps":["trace[1007688485] 'agreement among raft nodes before linearized reading' (duration: 323.175288ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-26T19:36:51.741202Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-26T19:36:51.417766Z","time spent":"323.42907ms","remote":"127.0.0.1:48074","response type":"/etcdserverpb.KV/Range","request count":0,"request size":87,"response count":1,"response size":922,"request content":"key:\"/registry/events/ingress-nginx/ingress-nginx-admission-patch-cjbkr.187ba5a5c1cb530b\" limit:1 "}
{"level":"warn","ts":"2025-11-26T19:36:51.741418Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"264.03278ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-26T19:36:51.741437Z","caller":"traceutil/trace.go:172","msg":"trace[235327979] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1082; }","duration":"264.052529ms","start":"2025-11-26T19:36:51.477379Z","end":"2025-11-26T19:36:51.741432Z","steps":["trace[235327979] 'agreement among raft nodes before linearized reading' (duration: 264.016233ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-26T19:36:51.741557Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.132994ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-26T19:36:51.741594Z","caller":"traceutil/trace.go:172","msg":"trace[2147001097] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1082; }","duration":"146.168737ms","start":"2025-11-26T19:36:51.595420Z","end":"2025-11-26T19:36:51.741589Z","steps":["trace[2147001097] 'agreement among raft nodes before linearized reading' (duration: 146.127019ms)"],"step_count":1}
{"level":"info","ts":"2025-11-26T19:37:05.591379Z","caller":"traceutil/trace.go:172","msg":"trace[1894609465] linearizableReadLoop","detail":"{readStateIndex:1192; appliedIndex:1192; }","duration":"272.066417ms","start":"2025-11-26T19:37:05.319189Z","end":"2025-11-26T19:37:05.591256Z","steps":["trace[1894609465] 'read index received' (duration: 272.060004ms)","trace[1894609465] 'applied index is now lower than readState.Index' (duration: 5.495µs)"],"step_count":2}
{"level":"info","ts":"2025-11-26T19:37:05.591520Z","caller":"traceutil/trace.go:172","msg":"trace[1272142295] transaction","detail":"{read_only:false; response_revision:1159; number_of_response:1; }","duration":"315.442107ms","start":"2025-11-26T19:37:05.276068Z","end":"2025-11-26T19:37:05.591510Z","steps":["trace[1272142295] 'process raft request' (duration: 315.226614ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-26T19:37:05.591868Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"272.658913ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-cjbkr\" limit:1 ","response":"range_response_count:1 size:4885"}
{"level":"info","ts":"2025-11-26T19:37:05.591922Z","caller":"traceutil/trace.go:172","msg":"trace[1249583433] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-cjbkr; range_end:; response_count:1; response_revision:1159; }","duration":"272.720048ms","start":"2025-11-26T19:37:05.319186Z","end":"2025-11-26T19:37:05.591906Z","steps":["trace[1249583433] 'agreement among raft nodes before linearized reading' (duration: 272.559972ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-26T19:37:05.591945Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-26T19:37:05.275973Z","time spent":"315.879185ms","remote":"127.0.0.1:48074","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":919,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/ingress-nginx/ingress-nginx-admission-patch-cjbkr.187ba5a5a85601c4\" mod_revision:1068 > success:<request_put:<key:\"/registry/events/ingress-nginx/ingress-nginx-admission-patch-cjbkr.187ba5a5a85601c4\" value_size:818 lease:6421740275250318311 >> failure:<request_range:<key:\"/registry/events/ingress-nginx/ingress-nginx-admission-patch-cjbkr.187ba5a5a85601c4\" > >"}
{"level":"warn","ts":"2025-11-26T19:37:05.592100Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"235.967541ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-26T19:37:05.592123Z","caller":"traceutil/trace.go:172","msg":"trace[476598973] range","detail":"{range_begin:/registry/prioritylevelconfigurations; range_end:; response_count:0; response_revision:1159; }","duration":"235.990347ms","start":"2025-11-26T19:37:05.356125Z","end":"2025-11-26T19:37:05.592116Z","steps":["trace[476598973] 'agreement among raft nodes before linearized reading' (duration: 235.94434ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-26T19:37:05.592265Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"232.704999ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-26T19:37:05.592290Z","caller":"traceutil/trace.go:172","msg":"trace[1363427135] range","detail":"{range_begin:/registry/statefulsets; range_end:; response_count:0; response_revision:1159; }","duration":"232.730943ms","start":"2025-11-26T19:37:05.359552Z","end":"2025-11-26T19:37:05.592283Z","steps":["trace[1363427135] 'agreement among raft nodes before linearized reading' (duration: 232.677734ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-26T19:37:05.592405Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.907902ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-26T19:37:05.592428Z","caller":"traceutil/trace.go:172","msg":"trace[732722930] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1159; }","duration":"113.93171ms","start":"2025-11-26T19:37:05.478490Z","end":"2025-11-26T19:37:05.592422Z","steps":["trace[732722930] 'agreement among raft nodes before linearized reading' (duration: 113.890718ms)"],"step_count":1}
{"level":"info","ts":"2025-11-26T19:37:44.587834Z","caller":"traceutil/trace.go:172","msg":"trace[1046687004] transaction","detail":"{read_only:false; response_revision:1379; number_of_response:1; }","duration":"229.181041ms","start":"2025-11-26T19:37:44.358573Z","end":"2025-11-26T19:37:44.587754Z","steps":["trace[1046687004] 'process raft request' (duration: 229.090356ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-26T19:37:46.652164Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"275.878472ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6421740275250319562 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-rhjld\" mod_revision:1396 > success:<request_delete_range:<key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-rhjld\" > > failure:<request_range:<key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-rhjld\" > >>","response":"size:18"}
{"level":"info","ts":"2025-11-26T19:37:46.653523Z","caller":"traceutil/trace.go:172","msg":"trace[179650896] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1397; }","duration":"365.411173ms","start":"2025-11-26T19:37:46.288098Z","end":"2025-11-26T19:37:46.653509Z","steps":["trace[179650896] 'process raft request' (duration: 88.06036ms)","trace[179650896] 'compare' (duration: 275.691291ms)"],"step_count":2}
{"level":"info","ts":"2025-11-26T19:37:46.653913Z","caller":"traceutil/trace.go:172","msg":"trace[1200581374] transaction","detail":"{read_only:false; response_revision:1398; number_of_response:1; }","duration":"287.673652ms","start":"2025-11-26T19:37:46.364684Z","end":"2025-11-26T19:37:46.652358Z","steps":["trace[1200581374] 'process raft request' (duration: 287.576948ms)"],"step_count":1}
{"level":"info","ts":"2025-11-26T19:37:46.654080Z","caller":"traceutil/trace.go:172","msg":"trace[1873023201] linearizableReadLoop","detail":"{readStateIndex:1440; appliedIndex:1439; }","duration":"202.792718ms","start":"2025-11-26T19:37:46.451273Z","end":"2025-11-26T19:37:46.654065Z","steps":["trace[1873023201] 'read index received' (duration: 196.917087ms)","trace[1873023201] 'applied index is now lower than readState.Index' (duration: 5.874068ms)"],"step_count":2}
{"level":"warn","ts":"2025-11-26T19:37:46.654871Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-26T19:37:46.288079Z","time spent":"365.64963ms","remote":"127.0.0.1:48312","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":67,"response count":0,"response size":41,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-rhjld\" mod_revision:1396 > success:<request_delete_range:<key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-rhjld\" > > failure:<request_range:<key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-rhjld\" > >"}
{"level":"warn","ts":"2025-11-26T19:37:46.654891Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"203.627757ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-26T19:37:46.654955Z","caller":"traceutil/trace.go:172","msg":"trace[1767948582] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1398; }","duration":"203.694123ms","start":"2025-11-26T19:37:46.451250Z","end":"2025-11-26T19:37:46.654944Z","steps":["trace[1767948582] 'agreement among raft nodes before linearized reading' (duration: 203.487208ms)"],"step_count":1}
==> kernel <==
19:40:26 up 5 min, 0 users, load average: 1.45, 1.66, 0.83
Linux addons-198878 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [a81f4d14092622fa6ce01768db00090dc0df11716cb9afc5a98f33684c32db95] <==
W1126 19:36:22.406549 1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1126 19:36:22.433489 1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1126 19:36:22.463651 1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1126 19:36:22.493996 1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
E1126 19:37:31.146399 1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:54072: use of closed network connection
E1126 19:37:31.340228 1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:54098: use of closed network connection
I1126 19:37:40.828920 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.46.176"}
I1126 19:37:58.927209 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1126 19:37:59.125631 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.159.75"}
I1126 19:38:16.812734 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1126 19:38:27.326580 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I1126 19:38:43.962015 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1126 19:38:43.962080 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1126 19:38:44.008803 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1126 19:38:44.008997 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1126 19:38:44.020692 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1126 19:38:44.022428 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1126 19:38:44.035413 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1126 19:38:44.035500 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1126 19:38:44.072883 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1126 19:38:44.072911 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1126 19:38:45.021167 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1126 19:38:45.076377 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W1126 19:38:45.108956 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I1126 19:40:25.203386 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.157.132"}
==> kube-controller-manager [5be95f29d66ce013ac18e78d2b0377cc9ae86b14f1ef875c74630026b4a5651f] <==
E1126 19:38:54.125961 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1126 19:38:54.215262 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1126 19:38:54.216553 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1126 19:39:01.336366 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1126 19:39:01.337624 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1126 19:39:02.725192 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1126 19:39:02.726402 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1126 19:39:06.552147 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1126 19:39:06.553126 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1126 19:39:17.540525 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1126 19:39:17.541508 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1126 19:39:17.569449 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1126 19:39:17.570621 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1126 19:39:19.917762 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1126 19:39:19.919000 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1126 19:39:43.816572 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1126 19:39:43.817670 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1126 19:39:52.278380 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1126 19:39:52.279667 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1126 19:39:59.481715 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1126 19:39:59.482982 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1126 19:40:16.608635 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1126 19:40:16.609775 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1126 19:40:23.806495 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1126 19:40:23.808124 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
==> kube-proxy [d1122a8f344ffeb1984b48a8747125afcf335fc5a63852549889a34e589dbdd4] <==
I1126 19:35:55.182556 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1126 19:35:55.283149 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1126 19:35:55.284422 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.123"]
E1126 19:35:55.285000 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1126 19:35:55.694536 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1126 19:35:55.694748 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1126 19:35:55.694776 1 server_linux.go:132] "Using iptables Proxier"
I1126 19:35:55.712783 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1126 19:35:55.714579 1 server.go:527] "Version info" version="v1.34.1"
I1126 19:35:55.716073 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1126 19:35:55.742097 1 config.go:200] "Starting service config controller"
I1126 19:35:55.742136 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1126 19:35:55.745083 1 config.go:106] "Starting endpoint slice config controller"
I1126 19:35:55.745121 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1126 19:35:55.745383 1 config.go:403] "Starting serviceCIDR config controller"
I1126 19:35:55.745410 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1126 19:35:55.750732 1 config.go:309] "Starting node config controller"
I1126 19:35:55.750764 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1126 19:35:55.750771 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1126 19:35:55.843123 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1126 19:35:55.845394 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1126 19:35:55.845826 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
==> kube-scheduler [039227d5c32668d7a544e8dab27d32bdcf7a9493668abe7212bccfe5a90a90ad] <==
E1126 19:35:45.453906 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1126 19:35:45.454008 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1126 19:35:45.454095 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1126 19:35:45.454435 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1126 19:35:45.454532 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1126 19:35:45.455439 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1126 19:35:45.461560 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1126 19:35:45.461655 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1126 19:35:46.305266 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1126 19:35:46.338366 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1126 19:35:46.349551 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1126 19:35:46.407909 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1126 19:35:46.425909 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1126 19:35:46.452843 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1126 19:35:46.464245 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1126 19:35:46.477008 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1126 19:35:46.548444 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1126 19:35:46.575421 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1126 19:35:46.645344 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1126 19:35:46.701189 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1126 19:35:46.732095 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1126 19:35:46.789168 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1126 19:35:46.833947 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1126 19:35:46.933536 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
I1126 19:35:49.231943 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Nov 26 19:38:48 addons-198878 kubelet[1502]: E1126 19:38:48.543452 1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764185928542529210 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588567} inodes_used:{value:201}}"
Nov 26 19:38:48 addons-198878 kubelet[1502]: E1126 19:38:48.543541 1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764185928542529210 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588567} inodes_used:{value:201}}"
Nov 26 19:38:49 addons-198878 kubelet[1502]: I1126 19:38:49.429382 1502 scope.go:117] "RemoveContainer" containerID="ad378d4480d0f0322e2566cc4e18f336840698be845a75feb11dba46fa939cf0"
Nov 26 19:38:49 addons-198878 kubelet[1502]: I1126 19:38:49.552905 1502 scope.go:117] "RemoveContainer" containerID="161d0d203b8ccba0beae02de17c2d8098e2d82c463e97f5c1520cd938a346ef4"
Nov 26 19:38:58 addons-198878 kubelet[1502]: E1126 19:38:58.547405 1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764185938546137515 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588567} inodes_used:{value:201}}"
Nov 26 19:38:58 addons-198878 kubelet[1502]: E1126 19:38:58.547436 1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764185938546137515 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588567} inodes_used:{value:201}}"
Nov 26 19:39:08 addons-198878 kubelet[1502]: E1126 19:39:08.550388 1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764185948549778808 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588567} inodes_used:{value:201}}"
Nov 26 19:39:08 addons-198878 kubelet[1502]: E1126 19:39:08.550437 1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764185948549778808 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588567} inodes_used:{value:201}}"
Nov 26 19:39:18 addons-198878 kubelet[1502]: E1126 19:39:18.553894 1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764185958553082825 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588567} inodes_used:{value:201}}"
Nov 26 19:39:18 addons-198878 kubelet[1502]: E1126 19:39:18.553927 1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764185958553082825 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588567} inodes_used:{value:201}}"
Nov 26 19:39:28 addons-198878 kubelet[1502]: E1126 19:39:28.557085 1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764185968556739123 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588567} inodes_used:{value:201}}"
Nov 26 19:39:28 addons-198878 kubelet[1502]: E1126 19:39:28.557109 1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764185968556739123 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588567} inodes_used:{value:201}}"
Nov 26 19:39:38 addons-198878 kubelet[1502]: E1126 19:39:38.563029 1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764185978560693859 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588567} inodes_used:{value:201}}"
Nov 26 19:39:38 addons-198878 kubelet[1502]: E1126 19:39:38.563653 1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764185978560693859 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588567} inodes_used:{value:201}}"
Nov 26 19:39:48 addons-198878 kubelet[1502]: E1126 19:39:48.566684 1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764185988566054156 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588567} inodes_used:{value:201}}"
Nov 26 19:39:48 addons-198878 kubelet[1502]: E1126 19:39:48.566718 1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764185988566054156 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588567} inodes_used:{value:201}}"
Nov 26 19:39:58 addons-198878 kubelet[1502]: I1126 19:39:58.266875 1502 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-zt7pv" secret="" err="secret \"gcp-auth\" not found"
Nov 26 19:39:58 addons-198878 kubelet[1502]: E1126 19:39:58.570348 1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764185998569798236 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588567} inodes_used:{value:201}}"
Nov 26 19:39:58 addons-198878 kubelet[1502]: E1126 19:39:58.570371 1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764185998569798236 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588567} inodes_used:{value:201}}"
Nov 26 19:40:08 addons-198878 kubelet[1502]: E1126 19:40:08.573026 1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764186008572425379 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588567} inodes_used:{value:201}}"
Nov 26 19:40:08 addons-198878 kubelet[1502]: E1126 19:40:08.573083 1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764186008572425379 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588567} inodes_used:{value:201}}"
Nov 26 19:40:09 addons-198878 kubelet[1502]: I1126 19:40:09.266060 1502 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Nov 26 19:40:18 addons-198878 kubelet[1502]: E1126 19:40:18.576451 1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764186018575972021 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588567} inodes_used:{value:201}}"
Nov 26 19:40:18 addons-198878 kubelet[1502]: E1126 19:40:18.576495 1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764186018575972021 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588567} inodes_used:{value:201}}"
Nov 26 19:40:25 addons-198878 kubelet[1502]: I1126 19:40:25.234019 1502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j6jh\" (UniqueName: \"kubernetes.io/projected/f3b2ee9c-5d99-4c4f-b718-3209c64f7159-kube-api-access-6j6jh\") pod \"hello-world-app-5d498dc89-tkxwx\" (UID: \"f3b2ee9c-5d99-4c4f-b718-3209c64f7159\") " pod="default/hello-world-app-5d498dc89-tkxwx"
==> storage-provisioner [6560862725d64d696c066e82421c3884ddf391f1b99d75d5dd1cd0f58c1b7171] <==
W1126 19:40:01.628095 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1126 19:40:03.632368 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1126 19:40:03.640272 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1126 19:40:05.643675 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1126 19:40:05.648846 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1126 19:40:07.652492 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1126 19:40:07.660807 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1126 19:40:09.664062 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1126 19:40:09.669929 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1126 19:40:11.673933 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1126 19:40:11.680845 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1126 19:40:13.684982 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1126 19:40:13.691447 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1126 19:40:15.695241 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1126 19:40:15.703148 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1126 19:40:17.706744 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1126 19:40:17.711504 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1126 19:40:19.715568 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1126 19:40:19.723397 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1126 19:40:21.728582 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1126 19:40:21.735911 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1126 19:40:23.741847 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1126 19:40:23.749906 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1126 19:40:25.753465 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1126 19:40:25.760411 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-198878 -n addons-198878
helpers_test.go:269: (dbg) Run: kubectl --context addons-198878 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-7hjrv ingress-nginx-admission-patch-cjbkr
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context addons-198878 describe pod ingress-nginx-admission-create-7hjrv ingress-nginx-admission-patch-cjbkr
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-198878 describe pod ingress-nginx-admission-create-7hjrv ingress-nginx-admission-patch-cjbkr: exit status 1 (59.837946ms)
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-7hjrv" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-cjbkr" not found
** /stderr **
helpers_test.go:287: kubectl --context addons-198878 describe pod ingress-nginx-admission-create-7hjrv ingress-nginx-admission-patch-cjbkr: exit status 1
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-198878 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-198878 addons disable ingress-dns --alsologtostderr -v=1: (1.347924462s)
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-198878 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-198878 addons disable ingress --alsologtostderr -v=1: (7.741804471s)
--- FAIL: TestAddons/parallel/Ingress (158.06s)