=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run: kubectl --context addons-631036 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run: kubectl --context addons-631036 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run: kubectl --context addons-631036 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [2c09d0ba-4bcf-41ee-a6df-0ac2dfc801a8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [2c09d0ba-4bcf-41ee-a6df-0ac2dfc801a8] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.005101395s
I1025 08:33:07.061666 9881 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run: out/minikube-linux-amd64 -p addons-631036 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-631036 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.440324385s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run: kubectl --context addons-631036 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run: out/minikube-linux-amd64 -p addons-631036 ip
addons_test.go:299: (dbg) Run: nslookup hello-john.test 192.168.39.24
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-631036 -n addons-631036
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p addons-631036 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-631036 logs -n 25: (1.365130555s)
helpers_test.go:260: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-411797 │ download-only-411797 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
│ start │ --download-only -p binary-mirror-631240 --alsologtostderr --binary-mirror http://127.0.0.1:38089 --driver=kvm2 --container-runtime=crio │ binary-mirror-631240 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ │
│ delete │ -p binary-mirror-631240 │ binary-mirror-631240 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
│ addons │ enable dashboard -p addons-631036 │ addons-631036 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ │
│ addons │ disable dashboard -p addons-631036 │ addons-631036 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ │
│ start │ -p addons-631036 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-631036 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:32 UTC │
│ addons │ addons-631036 addons disable volcano --alsologtostderr -v=1 │ addons-631036 │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
│ addons │ addons-631036 addons disable gcp-auth --alsologtostderr -v=1 │ addons-631036 │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
│ addons │ enable headlamp -p addons-631036 --alsologtostderr -v=1 │ addons-631036 │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
│ addons │ addons-631036 addons disable metrics-server --alsologtostderr -v=1 │ addons-631036 │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
│ addons │ addons-631036 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-631036 │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
│ addons │ addons-631036 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-631036 │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
│ addons │ addons-631036 addons disable headlamp --alsologtostderr -v=1 │ addons-631036 │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
│ ip │ addons-631036 ip │ addons-631036 │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
│ addons │ addons-631036 addons disable registry --alsologtostderr -v=1 │ addons-631036 │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
│ addons │ addons-631036 addons disable yakd --alsologtostderr -v=1 │ addons-631036 │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
│ ssh │ addons-631036 ssh cat /opt/local-path-provisioner/pvc-28e1dc7b-1f5a-4207-a5b2-acbed43ab42a_default_test-pvc/file1 │ addons-631036 │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
│ addons │ addons-631036 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-631036 │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:33 UTC │
│ addons │ addons-631036 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-631036 │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │ 25 Oct 25 08:33 UTC │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-631036 │ addons-631036 │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │ 25 Oct 25 08:33 UTC │
│ addons │ addons-631036 addons disable registry-creds --alsologtostderr -v=1 │ addons-631036 │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │ 25 Oct 25 08:33 UTC │
│ ssh │ addons-631036 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-631036 │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │ │
│ addons │ addons-631036 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-631036 │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │ 25 Oct 25 08:33 UTC │
│ addons │ addons-631036 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-631036 │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │ 25 Oct 25 08:33 UTC │
│ ip │ addons-631036 ip │ addons-631036 │ jenkins │ v1.37.0 │ 25 Oct 25 08:35 UTC │ 25 Oct 25 08:35 UTC │
└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/10/25 08:29:40
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.24.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1025 08:29:40.695721 10463 out.go:360] Setting OutFile to fd 1 ...
I1025 08:29:40.695943 10463 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:29:40.695952 10463 out.go:374] Setting ErrFile to fd 2...
I1025 08:29:40.695957 10463 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:29:40.696135 10463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5973/.minikube/bin
I1025 08:29:40.696688 10463 out.go:368] Setting JSON to false
I1025 08:29:40.697500 10463 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":731,"bootTime":1761380250,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1025 08:29:40.697589 10463 start.go:141] virtualization: kvm guest
I1025 08:29:40.699647 10463 out.go:179] * [addons-631036] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1025 08:29:40.701325 10463 notify.go:220] Checking for updates...
I1025 08:29:40.701384 10463 out.go:179] - MINIKUBE_LOCATION=21796
I1025 08:29:40.702911 10463 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1025 08:29:40.704437 10463 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21796-5973/kubeconfig
I1025 08:29:40.705844 10463 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5973/.minikube
I1025 08:29:40.707133 10463 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1025 08:29:40.708419 10463 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1025 08:29:40.710154 10463 driver.go:421] Setting default libvirt URI to qemu:///system
I1025 08:29:40.741814 10463 out.go:179] * Using the kvm2 driver based on user configuration
I1025 08:29:40.743181 10463 start.go:305] selected driver: kvm2
I1025 08:29:40.743195 10463 start.go:925] validating driver "kvm2" against <nil>
I1025 08:29:40.743207 10463 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1025 08:29:40.743872 10463 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1025 08:29:40.744123 10463 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1025 08:29:40.744149 10463 cni.go:84] Creating CNI manager for ""
I1025 08:29:40.744192 10463 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1025 08:29:40.744198 10463 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1025 08:29:40.744262 10463 start.go:349] cluster config:
{Name:addons-631036 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-631036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1025 08:29:40.744355 10463 iso.go:125] acquiring lock: {Name:mk56ae07ef3e2fe29ebca77d84768cf173c5b3d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 08:29:40.745794 10463 out.go:179] * Starting "addons-631036" primary control-plane node in "addons-631036" cluster
I1025 08:29:40.747018 10463 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1025 08:29:40.747055 10463 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
I1025 08:29:40.747069 10463 cache.go:58] Caching tarball of preloaded images
I1025 08:29:40.747192 10463 preload.go:233] Found /home/jenkins/minikube-integration/21796-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1025 08:29:40.747203 10463 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
I1025 08:29:40.747510 10463 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/config.json ...
I1025 08:29:40.747535 10463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/config.json: {Name:mkcb1a921b1e0b0d5f4d452a0969ef27ecab2822 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 08:29:40.747681 10463 start.go:360] acquireMachinesLock for addons-631036: {Name:mk307ae3583c207a47794987d4930662cf65d417 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1025 08:29:40.747725 10463 start.go:364] duration metric: took 30.63µs to acquireMachinesLock for "addons-631036"
I1025 08:29:40.747742 10463 start.go:93] Provisioning new machine with config: &{Name:addons-631036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-631036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1025 08:29:40.747793 10463 start.go:125] createHost starting for "" (driver="kvm2")
I1025 08:29:40.749273 10463 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1025 08:29:40.749427 10463 start.go:159] libmachine.API.Create for "addons-631036" (driver="kvm2")
I1025 08:29:40.749455 10463 client.go:168] LocalClient.Create starting
I1025 08:29:40.749532 10463 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem
I1025 08:29:40.919015 10463 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/cert.pem
I1025 08:29:41.038521 10463 main.go:141] libmachine: creating domain...
I1025 08:29:41.038541 10463 main.go:141] libmachine: creating network...
I1025 08:29:41.039987 10463 main.go:141] libmachine: found existing default network
I1025 08:29:41.040206 10463 main.go:141] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1025 08:29:41.040764 10463 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e4ae00}
I1025 08:29:41.040853 10463 main.go:141] libmachine: defining private network:
<network>
<name>mk-addons-631036</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1025 08:29:41.046862 10463 main.go:141] libmachine: creating private network mk-addons-631036 192.168.39.0/24...
I1025 08:29:41.118578 10463 main.go:141] libmachine: private network mk-addons-631036 192.168.39.0/24 created
I1025 08:29:41.118855 10463 main.go:141] libmachine: <network>
<name>mk-addons-631036</name>
<uuid>235f9f39-a409-4b6e-a380-c479334ac67d</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:2b:63:73'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1025 08:29:41.118882 10463 main.go:141] libmachine: setting up store path in /home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036 ...
I1025 08:29:41.118904 10463 main.go:141] libmachine: building disk image from file:///home/jenkins/minikube-integration/21796-5973/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
I1025 08:29:41.118914 10463 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21796-5973/.minikube
I1025 08:29:41.119006 10463 main.go:141] libmachine: Downloading /home/jenkins/minikube-integration/21796-5973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21796-5973/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
I1025 08:29:41.378586 10463 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa...
I1025 08:29:41.897826 10463 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/addons-631036.rawdisk...
I1025 08:29:41.897866 10463 main.go:141] libmachine: Writing magic tar header
I1025 08:29:41.897890 10463 main.go:141] libmachine: Writing SSH key tar header
I1025 08:29:41.897959 10463 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036 ...
I1025 08:29:41.898023 10463 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036
I1025 08:29:41.898045 10463 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036 (perms=drwx------)
I1025 08:29:41.898057 10463 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21796-5973/.minikube/machines
I1025 08:29:41.898067 10463 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21796-5973/.minikube/machines (perms=drwxr-xr-x)
I1025 08:29:41.898078 10463 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21796-5973/.minikube
I1025 08:29:41.898089 10463 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21796-5973/.minikube (perms=drwxr-xr-x)
I1025 08:29:41.898096 10463 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21796-5973
I1025 08:29:41.898106 10463 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21796-5973 (perms=drwxrwxr-x)
I1025 08:29:41.898116 10463 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1025 08:29:41.898126 10463 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1025 08:29:41.898136 10463 main.go:141] libmachine: checking permissions on dir: /home/jenkins
I1025 08:29:41.898146 10463 main.go:141] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1025 08:29:41.898154 10463 main.go:141] libmachine: checking permissions on dir: /home
I1025 08:29:41.898163 10463 main.go:141] libmachine: skipping /home - not owner
I1025 08:29:41.898167 10463 main.go:141] libmachine: defining domain...
I1025 08:29:41.899557 10463 main.go:141] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-631036</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/addons-631036.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-631036'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
I1025 08:29:41.908133 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:68:81:5b in network default
I1025 08:29:41.908782 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:41.908800 10463 main.go:141] libmachine: starting domain...
I1025 08:29:41.908805 10463 main.go:141] libmachine: ensuring networks are active...
I1025 08:29:41.909603 10463 main.go:141] libmachine: Ensuring network default is active
I1025 08:29:41.909978 10463 main.go:141] libmachine: Ensuring network mk-addons-631036 is active
I1025 08:29:41.910640 10463 main.go:141] libmachine: getting domain XML...
I1025 08:29:41.911649 10463 main.go:141] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-631036</name>
<uuid>47cdcab0-e8ea-48b5-a70c-5c459d82a833</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/addons-631036.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:04:3b:0f'/>
<source network='mk-addons-631036'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:68:81:5b'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1025 08:29:43.241470 10463 main.go:141] libmachine: waiting for domain to start...
I1025 08:29:43.243012 10463 main.go:141] libmachine: domain is now running
I1025 08:29:43.243032 10463 main.go:141] libmachine: waiting for IP...
I1025 08:29:43.243808 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:43.244361 10463 main.go:141] libmachine: no network interface addresses found for domain addons-631036 (source=lease)
I1025 08:29:43.244373 10463 main.go:141] libmachine: trying to list again with source=arp
I1025 08:29:43.244597 10463 main.go:141] libmachine: unable to find current IP address of domain addons-631036 in network mk-addons-631036 (interfaces detected: [])
I1025 08:29:43.244635 10463 retry.go:31] will retry after 298.209668ms: waiting for domain to come up
I1025 08:29:43.544077 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:43.544633 10463 main.go:141] libmachine: no network interface addresses found for domain addons-631036 (source=lease)
I1025 08:29:43.544648 10463 main.go:141] libmachine: trying to list again with source=arp
I1025 08:29:43.544930 10463 main.go:141] libmachine: unable to find current IP address of domain addons-631036 in network mk-addons-631036 (interfaces detected: [])
I1025 08:29:43.544959 10463 retry.go:31] will retry after 253.047315ms: waiting for domain to come up
I1025 08:29:43.799355 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:43.799862 10463 main.go:141] libmachine: no network interface addresses found for domain addons-631036 (source=lease)
I1025 08:29:43.799879 10463 main.go:141] libmachine: trying to list again with source=arp
I1025 08:29:43.800206 10463 main.go:141] libmachine: unable to find current IP address of domain addons-631036 in network mk-addons-631036 (interfaces detected: [])
I1025 08:29:43.800257 10463 retry.go:31] will retry after 473.795837ms: waiting for domain to come up
I1025 08:29:44.275904 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:44.276469 10463 main.go:141] libmachine: no network interface addresses found for domain addons-631036 (source=lease)
I1025 08:29:44.276486 10463 main.go:141] libmachine: trying to list again with source=arp
I1025 08:29:44.276791 10463 main.go:141] libmachine: unable to find current IP address of domain addons-631036 in network mk-addons-631036 (interfaces detected: [])
I1025 08:29:44.276822 10463 retry.go:31] will retry after 408.756949ms: waiting for domain to come up
I1025 08:29:44.687846 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:44.688811 10463 main.go:141] libmachine: no network interface addresses found for domain addons-631036 (source=lease)
I1025 08:29:44.688834 10463 main.go:141] libmachine: trying to list again with source=arp
I1025 08:29:44.689209 10463 main.go:141] libmachine: unable to find current IP address of domain addons-631036 in network mk-addons-631036 (interfaces detected: [])
I1025 08:29:44.689269 10463 retry.go:31] will retry after 677.09377ms: waiting for domain to come up
I1025 08:29:45.368460 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:45.369105 10463 main.go:141] libmachine: no network interface addresses found for domain addons-631036 (source=lease)
I1025 08:29:45.369128 10463 main.go:141] libmachine: trying to list again with source=arp
I1025 08:29:45.369530 10463 main.go:141] libmachine: unable to find current IP address of domain addons-631036 in network mk-addons-631036 (interfaces detected: [])
I1025 08:29:45.369573 10463 retry.go:31] will retry after 930.349614ms: waiting for domain to come up
I1025 08:29:46.301443 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:46.301973 10463 main.go:141] libmachine: no network interface addresses found for domain addons-631036 (source=lease)
I1025 08:29:46.301988 10463 main.go:141] libmachine: trying to list again with source=arp
I1025 08:29:46.302307 10463 main.go:141] libmachine: unable to find current IP address of domain addons-631036 in network mk-addons-631036 (interfaces detected: [])
I1025 08:29:46.302349 10463 retry.go:31] will retry after 775.285338ms: waiting for domain to come up
I1025 08:29:47.079525 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:47.080097 10463 main.go:141] libmachine: no network interface addresses found for domain addons-631036 (source=lease)
I1025 08:29:47.080115 10463 main.go:141] libmachine: trying to list again with source=arp
I1025 08:29:47.080461 10463 main.go:141] libmachine: unable to find current IP address of domain addons-631036 in network mk-addons-631036 (interfaces detected: [])
I1025 08:29:47.080503 10463 retry.go:31] will retry after 1.000525447s: waiting for domain to come up
I1025 08:29:48.082690 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:48.083250 10463 main.go:141] libmachine: no network interface addresses found for domain addons-631036 (source=lease)
I1025 08:29:48.083265 10463 main.go:141] libmachine: trying to list again with source=arp
I1025 08:29:48.083569 10463 main.go:141] libmachine: unable to find current IP address of domain addons-631036 in network mk-addons-631036 (interfaces detected: [])
I1025 08:29:48.083600 10463 retry.go:31] will retry after 1.700888796s: waiting for domain to come up
I1025 08:29:49.786627 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:49.787251 10463 main.go:141] libmachine: no network interface addresses found for domain addons-631036 (source=lease)
I1025 08:29:49.787266 10463 main.go:141] libmachine: trying to list again with source=arp
I1025 08:29:49.787557 10463 main.go:141] libmachine: unable to find current IP address of domain addons-631036 in network mk-addons-631036 (interfaces detected: [])
I1025 08:29:49.787591 10463 retry.go:31] will retry after 2.032833179s: waiting for domain to come up
I1025 08:29:51.822183 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:51.822872 10463 main.go:141] libmachine: no network interface addresses found for domain addons-631036 (source=lease)
I1025 08:29:51.822892 10463 main.go:141] libmachine: trying to list again with source=arp
I1025 08:29:51.823202 10463 main.go:141] libmachine: unable to find current IP address of domain addons-631036 in network mk-addons-631036 (interfaces detected: [])
I1025 08:29:51.823271 10463 retry.go:31] will retry after 2.195452187s: waiting for domain to come up
I1025 08:29:54.021606 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:54.022161 10463 main.go:141] libmachine: no network interface addresses found for domain addons-631036 (source=lease)
I1025 08:29:54.022178 10463 main.go:141] libmachine: trying to list again with source=arp
I1025 08:29:54.022494 10463 main.go:141] libmachine: unable to find current IP address of domain addons-631036 in network mk-addons-631036 (interfaces detected: [])
I1025 08:29:54.022531 10463 retry.go:31] will retry after 3.490188088s: waiting for domain to come up
I1025 08:29:57.515359 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:57.515926 10463 main.go:141] libmachine: domain addons-631036 has current primary IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:57.515944 10463 main.go:141] libmachine: found domain IP: 192.168.39.24
I1025 08:29:57.515954 10463 main.go:141] libmachine: reserving static IP address...
I1025 08:29:57.516319 10463 main.go:141] libmachine: unable to find host DHCP lease matching {name: "addons-631036", mac: "52:54:00:04:3b:0f", ip: "192.168.39.24"} in network mk-addons-631036
I1025 08:29:57.702132 10463 main.go:141] libmachine: reserved static IP address 192.168.39.24 for domain addons-631036
I1025 08:29:57.702160 10463 main.go:141] libmachine: waiting for SSH...
I1025 08:29:57.702168 10463 main.go:141] libmachine: Getting to WaitForSSH function...
I1025 08:29:57.705210 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:57.705651 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:minikube Clientid:01:52:54:00:04:3b:0f}
I1025 08:29:57.705681 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:57.705911 10463 main.go:141] libmachine: Using SSH client type: native
I1025 08:29:57.706183 10463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1025 08:29:57.706196 10463 main.go:141] libmachine: About to run SSH command:
exit 0
I1025 08:29:57.819926 10463 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1025 08:29:57.820338 10463 main.go:141] libmachine: domain creation complete
I1025 08:29:57.821746 10463 machine.go:93] provisionDockerMachine start ...
I1025 08:29:57.823846 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:57.824291 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:29:57.824316 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:57.824498 10463 main.go:141] libmachine: Using SSH client type: native
I1025 08:29:57.824695 10463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1025 08:29:57.824706 10463 main.go:141] libmachine: About to run SSH command:
hostname
I1025 08:29:57.934463 10463 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I1025 08:29:57.934491 10463 buildroot.go:166] provisioning hostname "addons-631036"
I1025 08:29:57.937521 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:57.937965 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:29:57.937989 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:57.938201 10463 main.go:141] libmachine: Using SSH client type: native
I1025 08:29:57.938451 10463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1025 08:29:57.938465 10463 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-631036 && echo "addons-631036" | sudo tee /etc/hostname
I1025 08:29:58.069956 10463 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-631036
I1025 08:29:58.073640 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:58.074053 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:29:58.074077 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:58.074311 10463 main.go:141] libmachine: Using SSH client type: native
I1025 08:29:58.074503 10463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1025 08:29:58.074518 10463 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-631036' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-631036/g' /etc/hosts;
else
echo '127.0.1.1 addons-631036' | sudo tee -a /etc/hosts;
fi
fi
I1025 08:29:58.198346 10463 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1025 08:29:58.198378 10463 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21796-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-5973/.minikube}
I1025 08:29:58.198419 10463 buildroot.go:174] setting up certificates
I1025 08:29:58.198431 10463 provision.go:84] configureAuth start
I1025 08:29:58.202025 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:58.202475 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:29:58.202497 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:58.205253 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:58.205718 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:29:58.205749 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:58.205887 10463 provision.go:143] copyHostCerts
I1025 08:29:58.205965 10463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-5973/.minikube/cert.pem (1123 bytes)
I1025 08:29:58.206109 10463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-5973/.minikube/key.pem (1679 bytes)
I1025 08:29:58.206184 10463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-5973/.minikube/ca.pem (1078 bytes)
I1025 08:29:58.206268 10463 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca-key.pem org=jenkins.addons-631036 san=[127.0.0.1 192.168.39.24 addons-631036 localhost minikube]
I1025 08:29:58.586608 10463 provision.go:177] copyRemoteCerts
I1025 08:29:58.586665 10463 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1025 08:29:58.589851 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:58.590373 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:29:58.590404 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:58.590575 10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
I1025 08:29:58.678119 10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1025 08:29:58.711072 10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1025 08:29:58.743918 10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1025 08:29:58.776915 10463 provision.go:87] duration metric: took 578.472161ms to configureAuth
I1025 08:29:58.776950 10463 buildroot.go:189] setting minikube options for container-runtime
I1025 08:29:58.777162 10463 config.go:182] Loaded profile config "addons-631036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:29:58.780159 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:58.780592 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:29:58.780622 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:58.780791 10463 main.go:141] libmachine: Using SSH client type: native
I1025 08:29:58.780987 10463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1025 08:29:58.781007 10463 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1025 08:29:59.026043 10463 main.go:141] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1025 08:29:59.026078 10463 machine.go:96] duration metric: took 1.204314191s to provisionDockerMachine
I1025 08:29:59.026094 10463 client.go:171] duration metric: took 18.276629357s to LocalClient.Create
I1025 08:29:59.026110 10463 start.go:167] duration metric: took 18.276685143s to libmachine.API.Create "addons-631036"
I1025 08:29:59.026118 10463 start.go:293] postStartSetup for "addons-631036" (driver="kvm2")
I1025 08:29:59.026126 10463 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1025 08:29:59.026203 10463 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1025 08:29:59.029181 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:59.029588 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:29:59.029611 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:59.029855 10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
I1025 08:29:59.125200 10463 ssh_runner.go:195] Run: cat /etc/os-release
I1025 08:29:59.131432 10463 info.go:137] Remote host: Buildroot 2025.02
I1025 08:29:59.131458 10463 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5973/.minikube/addons for local assets ...
I1025 08:29:59.131538 10463 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5973/.minikube/files for local assets ...
I1025 08:29:59.131562 10463 start.go:296] duration metric: took 105.439276ms for postStartSetup
I1025 08:29:59.135073 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:59.135522 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:29:59.135545 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:59.135757 10463 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/config.json ...
I1025 08:29:59.135951 10463 start.go:128] duration metric: took 18.388149122s to createHost
I1025 08:29:59.138233 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:59.138610 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:29:59.138630 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:59.138811 10463 main.go:141] libmachine: Using SSH client type: native
I1025 08:29:59.139041 10463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1025 08:29:59.139061 10463 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I1025 08:29:59.259355 10463 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761380999.218833551
I1025 08:29:59.259379 10463 fix.go:216] guest clock: 1761380999.218833551
I1025 08:29:59.259386 10463 fix.go:229] Guest: 2025-10-25 08:29:59.218833551 +0000 UTC Remote: 2025-10-25 08:29:59.135961729 +0000 UTC m=+18.487636401 (delta=82.871822ms)
I1025 08:29:59.259403 10463 fix.go:200] guest clock delta is within tolerance: 82.871822ms
I1025 08:29:59.259408 10463 start.go:83] releasing machines lock for "addons-631036", held for 18.511673494s
I1025 08:29:59.262606 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:59.263332 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:29:59.263362 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:59.264136 10463 ssh_runner.go:195] Run: cat /version.json
I1025 08:29:59.264170 10463 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1025 08:29:59.267725 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:59.267731 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:59.268399 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:29:59.268465 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:59.268480 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:29:59.268520 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:29:59.268673 10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
I1025 08:29:59.268852 10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
I1025 08:29:59.375771 10463 ssh_runner.go:195] Run: systemctl --version
I1025 08:29:59.382592 10463 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1025 08:29:59.544605 10463 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1025 08:29:59.552574 10463 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1025 08:29:59.552645 10463 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1025 08:29:59.574371 10463 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1025 08:29:59.574399 10463 start.go:495] detecting cgroup driver to use...
I1025 08:29:59.574459 10463 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1025 08:29:59.593535 10463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1025 08:29:59.611360 10463 docker.go:218] disabling cri-docker service (if available) ...
I1025 08:29:59.611426 10463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1025 08:29:59.629423 10463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1025 08:29:59.648459 10463 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1025 08:29:59.801057 10463 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1025 08:30:00.019587 10463 docker.go:234] disabling docker service ...
I1025 08:30:00.019651 10463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1025 08:30:00.036703 10463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1025 08:30:00.053263 10463 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1025 08:30:00.214725 10463 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1025 08:30:00.364681 10463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1025 08:30:00.383391 10463 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1025 08:30:00.407451 10463 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1025 08:30:00.407518 10463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1025 08:30:00.420966 10463 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1025 08:30:00.421039 10463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1025 08:30:00.434556 10463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1025 08:30:00.448048 10463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1025 08:30:00.461223 10463 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1025 08:30:00.475421 10463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1025 08:30:00.488726 10463 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1025 08:30:00.511110 10463 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
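The `sed` commands above rewrite individual keys in the drop-in `/etc/crio/crio.conf.d/02-crio.conf`. Pieced together from those edits, the resulting fragment looks roughly like the following; the `[crio.image]`/`[crio.runtime]` section placement is an assumption (the sed patterns match keys only, not sections), and surrounding lines are omitted:

```toml
[crio.image]
# set by the pause_image sed at 08:30:00.407518
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
# set by the cgroup_manager sed; conmon_cgroup is re-inserted after it
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
# default_sysctls is created if absent, then the entry is prepended
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
```

Allowing unprivileged port 0 and up lets pods bind low ports (such as the ingress controller's 80/443) without extra capabilities.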
I1025 08:30:00.524910 10463 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1025 08:30:00.536721 10463 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1025 08:30:00.536780 10463 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1025 08:30:00.558309 10463 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1025 08:30:00.570939 10463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1025 08:30:00.717621 10463 ssh_runner.go:195] Run: sudo systemctl restart crio
I1025 08:30:00.823698 10463 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
I1025 08:30:00.823805 10463 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1025 08:30:00.829358 10463 start.go:563] Will wait 60s for crictl version
I1025 08:30:00.829443 10463 ssh_runner.go:195] Run: which crictl
I1025 08:30:00.834045 10463 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1025 08:30:00.877407 10463 start.go:579] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1025 08:30:00.877552 10463 ssh_runner.go:195] Run: crio --version
I1025 08:30:00.908377 10463 ssh_runner.go:195] Run: crio --version
I1025 08:30:00.941918 10463 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
I1025 08:30:00.946319 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:00.946684 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:30:00.946705 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:00.946865 10463 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1025 08:30:00.951707 10463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1025 08:30:00.968351 10463 kubeadm.go:883] updating cluster {Name:addons-631036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-631036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1025 08:30:00.968463 10463 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1025 08:30:00.968508 10463 ssh_runner.go:195] Run: sudo crictl images --output json
I1025 08:30:01.005353 10463 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
I1025 08:30:01.005434 10463 ssh_runner.go:195] Run: which lz4
I1025 08:30:01.009951 10463 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1025 08:30:01.015169 10463 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1025 08:30:01.015225 10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
I1025 08:30:02.443791 10463 crio.go:462] duration metric: took 1.43389573s to copy over tarball
I1025 08:30:02.443863 10463 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1025 08:30:04.372032 10463 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.92814397s)
I1025 08:30:04.372060 10463 crio.go:469] duration metric: took 1.928239765s to extract the tarball
I1025 08:30:04.372079 10463 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1025 08:30:04.416414 10463 ssh_runner.go:195] Run: sudo crictl images --output json
I1025 08:30:04.466628 10463 crio.go:514] all images are preloaded for cri-o runtime.
I1025 08:30:04.466674 10463 cache_images.go:85] Images are preloaded, skipping loading
I1025 08:30:04.466700 10463 kubeadm.go:934] updating node { 192.168.39.24 8443 v1.34.1 crio true true} ...
I1025 08:30:04.466807 10463 kubeadm.go:946] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-631036 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.24
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:addons-631036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1025 08:30:04.466893 10463 ssh_runner.go:195] Run: crio config
I1025 08:30:04.516979 10463 cni.go:84] Creating CNI manager for ""
I1025 08:30:04.517014 10463 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1025 08:30:04.517049 10463 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1025 08:30:04.517077 10463 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.24 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-631036 NodeName:addons-631036 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1025 08:30:04.517230 10463 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.24
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "addons-631036"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.39.24"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.24"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1025 08:30:04.517327 10463 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1025 08:30:04.531168 10463 binaries.go:44] Found k8s binaries, skipping transfer
I1025 08:30:04.531264 10463 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1025 08:30:04.544394 10463 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I1025 08:30:04.567612 10463 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1025 08:30:04.590674 10463 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
I1025 08:30:04.612980 10463 ssh_runner.go:195] Run: grep 192.168.39.24 control-plane.minikube.internal$ /etc/hosts
I1025 08:30:04.618090 10463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.24 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1025 08:30:04.635403 10463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1025 08:30:04.789182 10463 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1025 08:30:04.811273 10463 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036 for IP: 192.168.39.24
I1025 08:30:04.811300 10463 certs.go:195] generating shared ca certs ...
I1025 08:30:04.811316 10463 certs.go:227] acquiring lock for ca certs: {Name:mke8d6ba2f98d813f76972dbfee9daa2e84822df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 08:30:04.811491 10463 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-5973/.minikube/ca.key
I1025 08:30:05.439097 10463 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5973/.minikube/ca.crt ...
I1025 08:30:05.439129 10463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/.minikube/ca.crt: {Name:mk52dd658a0757ce0a6c9d1937a34c5b33809a45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 08:30:05.439353 10463 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5973/.minikube/ca.key ...
I1025 08:30:05.439372 10463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/.minikube/ca.key: {Name:mk0a25859ffa4cab5b8f6ed9286aa875514390e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 08:30:05.439493 10463 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.key
I1025 08:30:05.794211 10463 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.crt ...
I1025 08:30:05.794246 10463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.crt: {Name:mk4aced4c5d58ad8d817891f3164a9d5ecefafb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 08:30:05.805373 10463 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.key ...
I1025 08:30:05.805410 10463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.key: {Name:mke338d3cdba9c659c3b1df69ce22afb83bc9a5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 08:30:05.805555 10463 certs.go:257] generating profile certs ...
I1025 08:30:05.805641 10463 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.key
I1025 08:30:05.805659 10463 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt with IP's: []
I1025 08:30:06.154111 10463 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt ...
I1025 08:30:06.154144 10463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: {Name:mk5e914da6617d5db487eb5d64f7c4a06b0d240b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 08:30:06.154353 10463 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.key ...
I1025 08:30:06.154368 10463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.key: {Name:mk02e1cd0d843764aa57671aa8a4f96ad3514f45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 08:30:06.154472 10463 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/apiserver.key.88f6c32f
I1025 08:30:06.154501 10463 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/apiserver.crt.88f6c32f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.24]
I1025 08:30:06.507164 10463 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/apiserver.crt.88f6c32f ...
I1025 08:30:06.507196 10463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/apiserver.crt.88f6c32f: {Name:mk96fe0a39fd076cc1ea279f8e3c11c3f7d8b3f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 08:30:06.507429 10463 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/apiserver.key.88f6c32f ...
I1025 08:30:06.507449 10463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/apiserver.key.88f6c32f: {Name:mk061a2652044a4cb0a34c624c008b6e699d6b2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 08:30:06.507554 10463 certs.go:382] copying /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/apiserver.crt.88f6c32f -> /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/apiserver.crt
I1025 08:30:06.507627 10463 certs.go:386] copying /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/apiserver.key.88f6c32f -> /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/apiserver.key
I1025 08:30:06.507674 10463 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/proxy-client.key
I1025 08:30:06.507691 10463 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/proxy-client.crt with IP's: []
I1025 08:30:06.545705 10463 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/proxy-client.crt ...
I1025 08:30:06.545743 10463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/proxy-client.crt: {Name:mk147b81e8c7960c73ea5a0ac7ebe9763e43565b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 08:30:06.545963 10463 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/proxy-client.key ...
I1025 08:30:06.545983 10463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/proxy-client.key: {Name:mkf10dc08877c1895350813fc7155a602105261f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 08:30:06.546201 10463 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca-key.pem (1679 bytes)
I1025 08:30:06.546261 10463 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem (1078 bytes)
I1025 08:30:06.546285 10463 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/cert.pem (1123 bytes)
I1025 08:30:06.546326 10463 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/key.pem (1679 bytes)
I1025 08:30:06.546907 10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1025 08:30:06.585153 10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1025 08:30:06.617185 10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1025 08:30:06.651261 10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
I1025 08:30:06.684039 10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1025 08:30:06.717347 10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1025 08:30:06.751188 10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1025 08:30:06.782843 10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1025 08:30:06.815805 10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1025 08:30:06.849340 10463 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1025 08:30:06.873749 10463 ssh_runner.go:195] Run: openssl version
I1025 08:30:06.880802 10463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1025 08:30:06.894071 10463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1025 08:30:06.899610 10463 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
I1025 08:30:06.899677 10463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1025 08:30:06.907576 10463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1025 08:30:06.922175 10463 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1025 08:30:06.927613 10463 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1025 08:30:06.927679 10463 kubeadm.go:400] StartCluster: {Name:addons-631036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-631036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1025 08:30:06.927757 10463 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1025 08:30:06.927825 10463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1025 08:30:06.969004 10463 cri.go:89] found id: ""
I1025 08:30:06.969094 10463 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1025 08:30:06.981215 10463 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1025 08:30:06.994112 10463 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1025 08:30:07.007901 10463 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1025 08:30:07.007929 10463 kubeadm.go:157] found existing configuration files:
I1025 08:30:07.008006 10463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1025 08:30:07.019502 10463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1025 08:30:07.019575 10463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1025 08:30:07.031854 10463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1025 08:30:07.043496 10463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1025 08:30:07.043563 10463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1025 08:30:07.055843 10463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1025 08:30:07.069028 10463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1025 08:30:07.069113 10463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1025 08:30:07.081969 10463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1025 08:30:07.093875 10463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1025 08:30:07.093942 10463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1025 08:30:07.106573 10463 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1025 08:30:07.163256 10463 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
I1025 08:30:07.163337 10463 kubeadm.go:318] [preflight] Running pre-flight checks
I1025 08:30:07.283107 10463 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
I1025 08:30:07.283324 10463 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1025 08:30:07.283542 10463 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1025 08:30:07.295180 10463 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1025 08:30:07.392098 10463 out.go:252] - Generating certificates and keys ...
I1025 08:30:07.392269 10463 kubeadm.go:318] [certs] Using existing ca certificate authority
I1025 08:30:07.392371 10463 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
I1025 08:30:08.035875 10463 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
I1025 08:30:08.143153 10463 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
I1025 08:30:08.372659 10463 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
I1025 08:30:08.546768 10463 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
I1025 08:30:08.694288 10463 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
I1025 08:30:08.694417 10463 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-631036 localhost] and IPs [192.168.39.24 127.0.0.1 ::1]
I1025 08:30:08.798842 10463 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
I1025 08:30:08.798981 10463 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-631036 localhost] and IPs [192.168.39.24 127.0.0.1 ::1]
I1025 08:30:08.962675 10463 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
I1025 08:30:09.241830 10463 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
I1025 08:30:09.301300 10463 kubeadm.go:318] [certs] Generating "sa" key and public key
I1025 08:30:09.301394 10463 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1025 08:30:09.709210 10463 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
I1025 08:30:09.775060 10463 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1025 08:30:10.049167 10463 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1025 08:30:10.397152 10463 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1025 08:30:10.715594 10463 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1025 08:30:10.716110 10463 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1025 08:30:10.718440 10463 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1025 08:30:10.720426 10463 out.go:252] - Booting up control plane ...
I1025 08:30:10.720550 10463 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1025 08:30:10.720658 10463 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1025 08:30:10.722464 10463 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1025 08:30:10.742580 10463 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1025 08:30:10.742759 10463 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1025 08:30:10.749658 10463 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1025 08:30:10.749889 10463 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1025 08:30:10.749955 10463 kubeadm.go:318] [kubelet-start] Starting the kubelet
I1025 08:30:10.916473 10463 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1025 08:30:10.916638 10463 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1025 08:30:11.425813 10463 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 507.941174ms
I1025 08:30:11.426127 10463 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1025 08:30:11.426341 10463 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.24:8443/livez
I1025 08:30:11.426468 10463 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1025 08:30:11.426584 10463 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1025 08:30:13.581523 10463 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.156715375s
I1025 08:30:15.404321 10463 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.981763602s
I1025 08:30:17.423779 10463 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.002618972s
I1025 08:30:17.446079 10463 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1025 08:30:17.466375 10463 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1025 08:30:17.478713 10463 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
I1025 08:30:17.479013 10463 kubeadm.go:318] [mark-control-plane] Marking the node addons-631036 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1025 08:30:17.496957 10463 kubeadm.go:318] [bootstrap-token] Using token: oukl03.1aed6xmxtahaalv2
I1025 08:30:17.498387 10463 out.go:252] - Configuring RBAC rules ...
I1025 08:30:17.498527 10463 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1025 08:30:17.507604 10463 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1025 08:30:17.516788 10463 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1025 08:30:17.520354 10463 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1025 08:30:17.523820 10463 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1025 08:30:17.531152 10463 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1025 08:30:17.835844 10463 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1025 08:30:18.294401 10463 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
I1025 08:30:18.830499 10463 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
I1025 08:30:18.831613 10463 kubeadm.go:318]
I1025 08:30:18.831685 10463 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
I1025 08:30:18.831693 10463 kubeadm.go:318]
I1025 08:30:18.831756 10463 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
I1025 08:30:18.831788 10463 kubeadm.go:318]
I1025 08:30:18.831839 10463 kubeadm.go:318] mkdir -p $HOME/.kube
I1025 08:30:18.831915 10463 kubeadm.go:318] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1025 08:30:18.831989 10463 kubeadm.go:318] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1025 08:30:18.832022 10463 kubeadm.go:318]
I1025 08:30:18.832104 10463 kubeadm.go:318] Alternatively, if you are the root user, you can run:
I1025 08:30:18.832114 10463 kubeadm.go:318]
I1025 08:30:18.832176 10463 kubeadm.go:318] export KUBECONFIG=/etc/kubernetes/admin.conf
I1025 08:30:18.832188 10463 kubeadm.go:318]
I1025 08:30:18.832298 10463 kubeadm.go:318] You should now deploy a pod network to the cluster.
I1025 08:30:18.832400 10463 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1025 08:30:18.832492 10463 kubeadm.go:318] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1025 08:30:18.832504 10463 kubeadm.go:318]
I1025 08:30:18.832634 10463 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
I1025 08:30:18.832748 10463 kubeadm.go:318] and service account keys on each node and then running the following as root:
I1025 08:30:18.832757 10463 kubeadm.go:318]
I1025 08:30:18.832872 10463 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token oukl03.1aed6xmxtahaalv2 \
I1025 08:30:18.833044 10463 kubeadm.go:318] --discovery-token-ca-cert-hash sha256:fe6caeb5ca9f886e925578a66a55439fd94175d5983e2e751a2d3d56b0fd904d \
I1025 08:30:18.833078 10463 kubeadm.go:318] --control-plane
I1025 08:30:18.833083 10463 kubeadm.go:318]
I1025 08:30:18.833208 10463 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
I1025 08:30:18.833218 10463 kubeadm.go:318]
I1025 08:30:18.833361 10463 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token oukl03.1aed6xmxtahaalv2 \
I1025 08:30:18.833516 10463 kubeadm.go:318] --discovery-token-ca-cert-hash sha256:fe6caeb5ca9f886e925578a66a55439fd94175d5983e2e751a2d3d56b0fd904d
I1025 08:30:18.835156 10463 kubeadm.go:318] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1025 08:30:18.835189 10463 cni.go:84] Creating CNI manager for ""
I1025 08:30:18.835201 10463 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1025 08:30:18.838073 10463 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1025 08:30:18.839635 10463 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1025 08:30:18.858618 10463 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I1025 08:30:18.883100 10463 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1025 08:30:18.883191 10463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1025 08:30:18.883191 10463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-631036 minikube.k8s.io/updated_at=2025_10_25T08_30_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373 minikube.k8s.io/name=addons-631036 minikube.k8s.io/primary=true
I1025 08:30:18.927072 10463 ops.go:34] apiserver oom_adj: -16
I1025 08:30:19.041058 10463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1025 08:30:19.542037 10463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1025 08:30:20.042005 10463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1025 08:30:20.542232 10463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1025 08:30:21.041403 10463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1025 08:30:21.541907 10463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1025 08:30:22.041493 10463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1025 08:30:22.541525 10463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1025 08:30:23.041806 10463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1025 08:30:23.541156 10463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1025 08:30:23.716964 10463 kubeadm.go:1113] duration metric: took 4.83385667s to wait for elevateKubeSystemPrivileges
I1025 08:30:23.717012 10463 kubeadm.go:402] duration metric: took 16.789327545s to StartCluster
I1025 08:30:23.717031 10463 settings.go:142] acquiring lock: {Name:mkceaa31f1735308eeec0f271d1ae2367ed96dc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 08:30:23.717175 10463 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21796-5973/kubeconfig
I1025 08:30:23.717858 10463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/kubeconfig: {Name:mk7395a01001bce28a4f8d18a1c883ac67624078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 08:30:23.718127 10463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1025 08:30:23.718124 10463 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1025 08:30:23.718144 10463 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1025 08:30:23.718359 10463 addons.go:69] Setting yakd=true in profile "addons-631036"
I1025 08:30:23.718382 10463 addons.go:238] Setting addon yakd=true in "addons-631036"
I1025 08:30:23.718401 10463 config.go:182] Loaded profile config "addons-631036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:30:23.718418 10463 addons.go:69] Setting volcano=true in profile "addons-631036"
I1025 08:30:23.718715 10463 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-631036"
I1025 08:30:23.718774 10463 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-631036"
I1025 08:30:23.718781 10463 addons.go:238] Setting addon volcano=true in "addons-631036"
I1025 08:30:23.718819 10463 addons.go:69] Setting storage-provisioner=true in profile "addons-631036"
I1025 08:30:23.718833 10463 host.go:66] Checking if "addons-631036" exists ...
I1025 08:30:23.718853 10463 addons.go:238] Setting addon storage-provisioner=true in "addons-631036"
I1025 08:30:23.718879 10463 host.go:66] Checking if "addons-631036" exists ...
I1025 08:30:23.718974 10463 host.go:66] Checking if "addons-631036" exists ...
I1025 08:30:23.718997 10463 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-631036"
I1025 08:30:23.719041 10463 addons.go:69] Setting registry=true in profile "addons-631036"
I1025 08:30:23.719049 10463 addons.go:69] Setting ingress=true in profile "addons-631036"
I1025 08:30:23.719060 10463 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-631036"
I1025 08:30:23.719065 10463 addons.go:238] Setting addon ingress=true in "addons-631036"
I1025 08:30:23.719078 10463 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-631036"
I1025 08:30:23.719090 10463 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-631036"
I1025 08:30:23.719099 10463 host.go:66] Checking if "addons-631036" exists ...
I1025 08:30:23.719111 10463 host.go:66] Checking if "addons-631036" exists ...
I1025 08:30:23.719153 10463 host.go:66] Checking if "addons-631036" exists ...
I1025 08:30:23.719169 10463 addons.go:69] Setting inspektor-gadget=true in profile "addons-631036"
I1025 08:30:23.719183 10463 addons.go:238] Setting addon inspektor-gadget=true in "addons-631036"
I1025 08:30:23.719206 10463 host.go:66] Checking if "addons-631036" exists ...
I1025 08:30:23.720062 10463 addons.go:69] Setting volumesnapshots=true in profile "addons-631036"
I1025 08:30:23.720087 10463 addons.go:238] Setting addon volumesnapshots=true in "addons-631036"
I1025 08:30:23.720112 10463 host.go:66] Checking if "addons-631036" exists ...
I1025 08:30:23.720758 10463 addons.go:69] Setting metrics-server=true in profile "addons-631036"
I1025 08:30:23.720781 10463 addons.go:238] Setting addon metrics-server=true in "addons-631036"
I1025 08:30:23.720805 10463 host.go:66] Checking if "addons-631036" exists ...
I1025 08:30:23.721108 10463 addons.go:69] Setting ingress-dns=true in profile "addons-631036"
I1025 08:30:23.721134 10463 addons.go:238] Setting addon ingress-dns=true in "addons-631036"
I1025 08:30:23.721172 10463 host.go:66] Checking if "addons-631036" exists ...
I1025 08:30:23.721636 10463 addons.go:69] Setting registry-creds=true in profile "addons-631036"
I1025 08:30:23.721658 10463 addons.go:238] Setting addon registry-creds=true in "addons-631036"
I1025 08:30:23.719052 10463 addons.go:238] Setting addon registry=true in "addons-631036"
I1025 08:30:23.721679 10463 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-631036"
I1025 08:30:23.721690 10463 host.go:66] Checking if "addons-631036" exists ...
I1025 08:30:23.721702 10463 host.go:66] Checking if "addons-631036" exists ...
I1025 08:30:23.721741 10463 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-631036"
I1025 08:30:23.721770 10463 host.go:66] Checking if "addons-631036" exists ...
I1025 08:30:23.721891 10463 addons.go:69] Setting default-storageclass=true in profile "addons-631036"
I1025 08:30:23.721928 10463 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-631036"
I1025 08:30:23.722395 10463 addons.go:69] Setting gcp-auth=true in profile "addons-631036"
I1025 08:30:23.722421 10463 mustload.go:65] Loading cluster: addons-631036
I1025 08:30:23.722707 10463 addons.go:69] Setting cloud-spanner=true in profile "addons-631036"
I1025 08:30:23.722729 10463 addons.go:238] Setting addon cloud-spanner=true in "addons-631036"
I1025 08:30:23.722753 10463 host.go:66] Checking if "addons-631036" exists ...
I1025 08:30:23.722788 10463 out.go:179] * Verifying Kubernetes components...
I1025 08:30:23.722704 10463 config.go:182] Loaded profile config "addons-631036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:30:23.725104 10463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1025 08:30:23.727473 10463 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1025 08:30:23.727500 10463 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
I1025 08:30:23.727489 10463 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
I1025 08:30:23.727542 10463 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
W1025 08:30:23.729080 10463 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1025 08:30:23.729213 10463 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1025 08:30:23.729494 10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1025 08:30:23.729835 10463 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1025 08:30:23.729838 10463 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1025 08:30:23.729858 10463 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1025 08:30:23.730397 10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1025 08:30:23.729898 10463 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
I1025 08:30:23.730573 10463 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
I1025 08:30:23.730806 10463 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
I1025 08:30:23.731433 10463 host.go:66] Checking if "addons-631036" exists ...
I1025 08:30:23.731650 10463 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1025 08:30:23.731772 10463 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1025 08:30:23.731999 10463 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1025 08:30:23.732328 10463 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1025 08:30:23.732614 10463 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1025 08:30:23.733059 10463 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1025 08:30:23.732624 10463 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1025 08:30:23.732963 10463 addons.go:238] Setting addon default-storageclass=true in "addons-631036"
I1025 08:30:23.733295 10463 host.go:66] Checking if "addons-631036" exists ...
I1025 08:30:23.732965 10463 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-631036"
I1025 08:30:23.733390 10463 host.go:66] Checking if "addons-631036" exists ...
I1025 08:30:23.733469 10463 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1025 08:30:23.733479 10463 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1025 08:30:23.733488 10463 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1025 08:30:23.734985 10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1025 08:30:23.733601 10463 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.5
I1025 08:30:23.734602 10463 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1025 08:30:23.735130 10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1025 08:30:23.735434 10463 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
I1025 08:30:23.735471 10463 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
I1025 08:30:23.736267 10463 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
I1025 08:30:23.736286 10463 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1025 08:30:23.736329 10463 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1025 08:30:23.736658 10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1025 08:30:23.737174 10463 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1025 08:30:23.737269 10463 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
I1025 08:30:23.737548 10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1025 08:30:23.737311 10463 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1025 08:30:23.737655 10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1025 08:30:23.737980 10463 out.go:179] - Using image docker.io/registry:3.0.0
I1025 08:30:23.738168 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.738718 10463 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I1025 08:30:23.738736 10463 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1025 08:30:23.739655 10463 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
I1025 08:30:23.739672 10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1025 08:30:23.739675 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.740177 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:30:23.740209 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.740295 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.740749 10463 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1025 08:30:23.740767 10463 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1025 08:30:23.740925 10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
I1025 08:30:23.741467 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:30:23.741498 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.741811 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.742206 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:30:23.742260 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.742316 10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
I1025 08:30:23.743093 10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
I1025 08:30:23.743493 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:30:23.743521 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.744052 10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
I1025 08:30:23.744404 10463 out.go:179] - Using image docker.io/busybox:stable
I1025 08:30:23.744485 10463 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1025 08:30:23.745022 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.745884 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.745985 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:30:23.746015 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.746139 10463 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1025 08:30:23.746159 10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1025 08:30:23.746580 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.746620 10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
I1025 08:30:23.747132 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:30:23.747164 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.747490 10463 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1025 08:30:23.747498 10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
I1025 08:30:23.747697 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.748082 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:30:23.748158 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.748195 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.748666 10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
I1025 08:30:23.749139 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:30:23.749170 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.749377 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.749438 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:30:23.749458 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.749473 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.749525 10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
I1025 08:30:23.750365 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:30:23.750381 10463 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1025 08:30:23.750382 10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
I1025 08:30:23.750408 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.750393 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:30:23.750722 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.750394 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.750395 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.751005 10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
I1025 08:30:23.751205 10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
I1025 08:30:23.751262 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:30:23.751297 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.751513 10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
I1025 08:30:23.751709 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:30:23.751739 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.751922 10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
I1025 08:30:23.752724 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.753126 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:30:23.753148 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.753323 10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
I1025 08:30:23.753476 10463 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1025 08:30:23.755130 10463 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1025 08:30:23.756291 10463 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1025 08:30:23.756314 10463 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1025 08:30:23.759017 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.759409 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:30:23.759438 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:23.759618 10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
W1025 08:30:24.149670 10463 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:45356->192.168.39.24:22: read: connection reset by peer
I1025 08:30:24.149698 10463 retry.go:31] will retry after 126.652669ms: ssh: handshake failed: read tcp 192.168.39.1:45356->192.168.39.24:22: read: connection reset by peer
I1025 08:30:24.674967 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1025 08:30:24.750161 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1025 08:30:24.760436 10463 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1025 08:30:24.760456 10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1025 08:30:24.838632 10463 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
I1025 08:30:24.838652 10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1025 08:30:24.864773 10463 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
I1025 08:30:24.864793 10463 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1025 08:30:24.864949 10463 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1025 08:30:24.864976 10463 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1025 08:30:24.912700 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1025 08:30:24.917528 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1025 08:30:24.951561 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1025 08:30:24.972820 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1025 08:30:24.975250 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1025 08:30:25.001694 10463 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
I1025 08:30:25.001723 10463 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1025 08:30:25.046097 10463 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1025 08:30:25.046125 10463 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1025 08:30:25.050834 10463 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.332662616s)
I1025 08:30:25.050917 10463 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.325750231s)
I1025 08:30:25.051011 10463 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1025 08:30:25.051017 10463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1025 08:30:25.210679 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1025 08:30:25.221126 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1025 08:30:25.230205 10463 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1025 08:30:25.230263 10463 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1025 08:30:25.412545 10463 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1025 08:30:25.412584 10463 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1025 08:30:25.419332 10463 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1025 08:30:25.419359 10463 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1025 08:30:25.438132 10463 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
I1025 08:30:25.438157 10463 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1025 08:30:25.440644 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 08:30:25.480010 10463 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
I1025 08:30:25.480031 10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1025 08:30:25.658443 10463 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1025 08:30:25.658477 10463 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1025 08:30:25.689303 10463 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1025 08:30:25.689336 10463 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1025 08:30:25.752066 10463 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1025 08:30:25.752092 10463 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1025 08:30:25.760662 10463 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
I1025 08:30:25.760685 10463 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1025 08:30:25.795722 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1025 08:30:26.007912 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1025 08:30:26.019467 10463 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1025 08:30:26.019491 10463 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1025 08:30:26.083668 10463 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1025 08:30:26.083693 10463 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1025 08:30:26.091730 10463 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
I1025 08:30:26.091752 10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1025 08:30:26.376734 10463 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1025 08:30:26.376757 10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1025 08:30:26.414502 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1025 08:30:26.445597 10463 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1025 08:30:26.445631 10463 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1025 08:30:26.645098 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.970089507s)
I1025 08:30:26.757952 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1025 08:30:26.857543 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.107343906s)
I1025 08:30:26.857646 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.944915549s)
I1025 08:30:26.980342 10463 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1025 08:30:26.980372 10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1025 08:30:27.556579 10463 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1025 08:30:27.556601 10463 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1025 08:30:27.932089 10463 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1025 08:30:27.932114 10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1025 08:30:28.298464 10463 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1025 08:30:28.298496 10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1025 08:30:28.785072 10463 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1025 08:30:28.785104 10463 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1025 08:30:29.284463 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1025 08:30:30.342283 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.424712998s)
I1025 08:30:31.203782 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.252168193s)
I1025 08:30:31.203954 10463 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1025 08:30:31.207066 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:31.207633 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:30:31.207670 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:31.207922 10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
I1025 08:30:31.759938 10463 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1025 08:30:32.021815 10463 addons.go:238] Setting addon gcp-auth=true in "addons-631036"
I1025 08:30:32.021883 10463 host.go:66] Checking if "addons-631036" exists ...
I1025 08:30:32.024391 10463 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1025 08:30:32.027337 10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:32.027885 10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
I1025 08:30:32.027924 10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
I1025 08:30:32.028213 10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
I1025 08:30:33.169903 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.197031057s)
I1025 08:30:33.169944 10463 addons.go:479] Verifying addon ingress=true in "addons-631036"
I1025 08:30:33.169993 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.194708259s)
I1025 08:30:33.170124 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.959418667s)
I1025 08:30:33.170028 10463 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.118997612s)
I1025 08:30:33.170213 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.949054705s)
I1025 08:30:33.170062 10463 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.119024231s)
I1025 08:30:33.170268 10463 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1025 08:30:33.170337 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.729652016s)
I1025 08:30:33.170364 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.37461001s)
W1025 08:30:33.170365 10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget created
serviceaccount/gadget created
configmap/gadget created
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
role.rbac.authorization.k8s.io/gadget-role created
rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
daemonset.apps/gadget created
stderr:
Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:30:33.170381 10463 addons.go:479] Verifying addon registry=true in "addons-631036"
I1025 08:30:33.170390 10463 retry.go:31] will retry after 161.771319ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget created
serviceaccount/gadget created
configmap/gadget created
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
role.rbac.authorization.k8s.io/gadget-role created
rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
daemonset.apps/gadget created
stderr:
Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:30:33.170476 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.162532812s)
I1025 08:30:33.170490 10463 addons.go:479] Verifying addon metrics-server=true in "addons-631036"
I1025 08:30:33.170580 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.756038893s)
I1025 08:30:33.170993 10463 node_ready.go:35] waiting up to 6m0s for node "addons-631036" to be "Ready" ...
I1025 08:30:33.173412 10463 out.go:179] * Verifying ingress addon...
I1025 08:30:33.173434 10463 out.go:179] * Verifying registry addon...
I1025 08:30:33.173441 10463 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-631036 service yakd-dashboard -n yakd-dashboard
I1025 08:30:33.175592 10463 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1025 08:30:33.175757 10463 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1025 08:30:33.223701 10463 node_ready.go:49] node "addons-631036" is "Ready"
I1025 08:30:33.223746 10463 node_ready.go:38] duration metric: took 52.724214ms for node "addons-631036" to be "Ready" ...
I1025 08:30:33.223765 10463 api_server.go:52] waiting for apiserver process to appear ...
I1025 08:30:33.223826 10463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 08:30:33.241332 10463 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1025 08:30:33.241365 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:33.241332 10463 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1025 08:30:33.241397 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
W1025 08:30:33.311424 10463 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
I1025 08:30:33.333215 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 08:30:33.479206 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.721212928s)
W1025 08:30:33.479276 10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1025 08:30:33.479303 10463 retry.go:31] will retry after 201.571306ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1025 08:30:33.681396 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1025 08:30:33.709556 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:33.710027 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:33.712260 10463 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-631036" context rescaled to 1 replicas
I1025 08:30:34.203629 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:34.205145 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:34.624502 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.339995599s)
I1025 08:30:34.624552 10463 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-631036"
I1025 08:30:34.624555 10463 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.600132054s)
I1025 08:30:34.624601 10463 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.400753651s)
I1025 08:30:34.624636 10463 api_server.go:72] duration metric: took 10.906411211s to wait for apiserver process to appear ...
I1025 08:30:34.624701 10463 api_server.go:88] waiting for apiserver healthz status ...
I1025 08:30:34.624725 10463 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
I1025 08:30:34.626800 10463 out.go:179] * Verifying csi-hostpath-driver addon...
I1025 08:30:34.626853 10463 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
I1025 08:30:34.628452 10463 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1025 08:30:34.629353 10463 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1025 08:30:34.629746 10463 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1025 08:30:34.629769 10463 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1025 08:30:34.693510 10463 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1025 08:30:34.693545 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:34.708218 10463 api_server.go:279] https://192.168.39.24:8443/healthz returned 200:
ok
I1025 08:30:34.712385 10463 api_server.go:141] control plane version: v1.34.1
I1025 08:30:34.712416 10463 api_server.go:131] duration metric: took 87.706392ms to wait for apiserver health ...
I1025 08:30:34.712428 10463 system_pods.go:43] waiting for kube-system pods to appear ...
I1025 08:30:34.760373 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:34.760492 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:34.762924 10463 system_pods.go:59] 20 kube-system pods found
I1025 08:30:34.762966 10463 system_pods.go:61] "amd-gpu-device-plugin-frvrc" [201f5833-8bf6-475d-82b1-c927a3c7317b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1025 08:30:34.762975 10463 system_pods.go:61] "coredns-66bc5c9577-8mtlq" [5d8ef08e-3e63-4391-b058-8567251dc2f6] Running
I1025 08:30:34.762983 10463 system_pods.go:61] "coredns-66bc5c9577-wk56k" [1147dfe5-42e8-493d-b71e-b18c2dccea1a] Running
I1025 08:30:34.763000 10463 system_pods.go:61] "csi-hostpath-attacher-0" [1300984c-bdb1-4a67-ad5f-38678737bd63] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1025 08:30:34.763019 10463 system_pods.go:61] "csi-hostpath-resizer-0" [cc67468a-08d6-4bfc-8f9f-034995939f82] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1025 08:30:34.763034 10463 system_pods.go:61] "csi-hostpathplugin-zf5nw" [263033f3-4c81-4830-bd0e-0c77d25821c6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1025 08:30:34.763047 10463 system_pods.go:61] "etcd-addons-631036" [5163e635-4efb-4129-86e9-b4cceeca0896] Running
I1025 08:30:34.763062 10463 system_pods.go:61] "kube-apiserver-addons-631036" [27881d61-62f0-46a2-b3c6-7b2dcb073b61] Running
I1025 08:30:34.763071 10463 system_pods.go:61] "kube-controller-manager-addons-631036" [6d8cf523-be28-41de-8226-906150d433e4] Running
I1025 08:30:34.763079 10463 system_pods.go:61] "kube-ingress-dns-minikube" [b8340127-56b4-4638-b7ca-1a5815a313cc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1025 08:30:34.763087 10463 system_pods.go:61] "kube-proxy-nzdhm" [d3cd3e35-b924-472f-9218-233cdce69396] Running
I1025 08:30:34.763093 10463 system_pods.go:61] "kube-scheduler-addons-631036" [f8cea14e-4cb3-4341-910c-a1fea712966f] Running
I1025 08:30:34.763105 10463 system_pods.go:61] "metrics-server-85b7d694d7-4b2tp" [060cbc46-1bf9-48ba-b6eb-9f0fe9e1a912] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1025 08:30:34.763116 10463 system_pods.go:61] "nvidia-device-plugin-daemonset-65m2r" [d049181d-68c1-439c-bfbb-61eff9e986fa] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1025 08:30:34.763130 10463 system_pods.go:61] "registry-6b586f9694-h2dlk" [deafd51c-1def-42f4-bf1d-433def2f97c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1025 08:30:34.763148 10463 system_pods.go:61] "registry-creds-764b6fb674-kzcnh" [2d445e86-d667-49cf-a274-1872cf7d57a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1025 08:30:34.763159 10463 system_pods.go:61] "registry-proxy-lfzv8" [44090c69-a71c-43ba-9342-a65d7cdcbea7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1025 08:30:34.763171 10463 system_pods.go:61] "snapshot-controller-7d9fbc56b8-k8kjc" [9bbac9c3-9506-4a94-8825-12d563a4ec5a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1025 08:30:34.763188 10463 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rmtbf" [67657df2-ca6b-4f00-a043-b8fdf294e0b6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1025 08:30:34.763198 10463 system_pods.go:61] "storage-provisioner" [48ababc1-07e4-4d36-89b2-8a6c8d29de6c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1025 08:30:34.763206 10463 system_pods.go:74] duration metric: took 50.772134ms to wait for pod list to return data ...
I1025 08:30:34.763217 10463 default_sa.go:34] waiting for default service account to be created ...
I1025 08:30:34.773772 10463 default_sa.go:45] found service account: "default"
I1025 08:30:34.773809 10463 default_sa.go:55] duration metric: took 10.584761ms for default service account to be created ...
I1025 08:30:34.773826 10463 system_pods.go:116] waiting for k8s-apps to be running ...
I1025 08:30:34.786641 10463 system_pods.go:86] 20 kube-system pods found
I1025 08:30:34.786679 10463 system_pods.go:89] "amd-gpu-device-plugin-frvrc" [201f5833-8bf6-475d-82b1-c927a3c7317b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1025 08:30:34.786694 10463 system_pods.go:89] "coredns-66bc5c9577-8mtlq" [5d8ef08e-3e63-4391-b058-8567251dc2f6] Running
I1025 08:30:34.786704 10463 system_pods.go:89] "coredns-66bc5c9577-wk56k" [1147dfe5-42e8-493d-b71e-b18c2dccea1a] Running
I1025 08:30:34.786712 10463 system_pods.go:89] "csi-hostpath-attacher-0" [1300984c-bdb1-4a67-ad5f-38678737bd63] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1025 08:30:34.786723 10463 system_pods.go:89] "csi-hostpath-resizer-0" [cc67468a-08d6-4bfc-8f9f-034995939f82] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1025 08:30:34.786732 10463 system_pods.go:89] "csi-hostpathplugin-zf5nw" [263033f3-4c81-4830-bd0e-0c77d25821c6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1025 08:30:34.786737 10463 system_pods.go:89] "etcd-addons-631036" [5163e635-4efb-4129-86e9-b4cceeca0896] Running
I1025 08:30:34.786743 10463 system_pods.go:89] "kube-apiserver-addons-631036" [27881d61-62f0-46a2-b3c6-7b2dcb073b61] Running
I1025 08:30:34.786753 10463 system_pods.go:89] "kube-controller-manager-addons-631036" [6d8cf523-be28-41de-8226-906150d433e4] Running
I1025 08:30:34.786760 10463 system_pods.go:89] "kube-ingress-dns-minikube" [b8340127-56b4-4638-b7ca-1a5815a313cc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1025 08:30:34.786764 10463 system_pods.go:89] "kube-proxy-nzdhm" [d3cd3e35-b924-472f-9218-233cdce69396] Running
I1025 08:30:34.786767 10463 system_pods.go:89] "kube-scheduler-addons-631036" [f8cea14e-4cb3-4341-910c-a1fea712966f] Running
I1025 08:30:34.786774 10463 system_pods.go:89] "metrics-server-85b7d694d7-4b2tp" [060cbc46-1bf9-48ba-b6eb-9f0fe9e1a912] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1025 08:30:34.786782 10463 system_pods.go:89] "nvidia-device-plugin-daemonset-65m2r" [d049181d-68c1-439c-bfbb-61eff9e986fa] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1025 08:30:34.786790 10463 system_pods.go:89] "registry-6b586f9694-h2dlk" [deafd51c-1def-42f4-bf1d-433def2f97c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1025 08:30:34.786802 10463 system_pods.go:89] "registry-creds-764b6fb674-kzcnh" [2d445e86-d667-49cf-a274-1872cf7d57a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1025 08:30:34.786809 10463 system_pods.go:89] "registry-proxy-lfzv8" [44090c69-a71c-43ba-9342-a65d7cdcbea7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1025 08:30:34.786827 10463 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k8kjc" [9bbac9c3-9506-4a94-8825-12d563a4ec5a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1025 08:30:34.786837 10463 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rmtbf" [67657df2-ca6b-4f00-a043-b8fdf294e0b6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1025 08:30:34.786847 10463 system_pods.go:89] "storage-provisioner" [48ababc1-07e4-4d36-89b2-8a6c8d29de6c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1025 08:30:34.786855 10463 system_pods.go:126] duration metric: took 13.023012ms to wait for k8s-apps to be running ...
I1025 08:30:34.786865 10463 system_svc.go:44] waiting for kubelet service to be running ....
I1025 08:30:34.786917 10463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1025 08:30:34.891269 10463 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1025 08:30:34.891335 10463 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1025 08:30:35.098085 10463 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1025 08:30:35.098117 10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1025 08:30:35.142761 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:35.173544 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1025 08:30:35.242943 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:35.245068 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:35.636347 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:35.680628 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:35.683787 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:36.139969 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:36.240967 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:36.241066 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:36.636285 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:36.683231 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:36.684933 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:36.944894 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.263442366s)
I1025 08:30:36.944948 10463 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.158003257s)
I1025 08:30:36.944978 10463 system_svc.go:56] duration metric: took 2.158109744s WaitForService to wait for kubelet
I1025 08:30:36.944990 10463 kubeadm.go:586] duration metric: took 13.226764568s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1025 08:30:36.945022 10463 node_conditions.go:102] verifying NodePressure condition ...
I1025 08:30:36.946729 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.613456018s)
W1025 08:30:36.946767 10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:30:36.946785 10463 retry.go:31] will retry after 411.387705ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:30:36.975838 10463 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1025 08:30:36.975870 10463 node_conditions.go:123] node cpu capacity is 2
I1025 08:30:36.975880 10463 node_conditions.go:105] duration metric: took 30.851109ms to run NodePressure ...
I1025 08:30:36.975891 10463 start.go:241] waiting for startup goroutines ...
I1025 08:30:37.172191 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.998592015s)
I1025 08:30:37.173586 10463 addons.go:479] Verifying addon gcp-auth=true in "addons-631036"
I1025 08:30:37.175723 10463 out.go:179] * Verifying gcp-auth addon...
I1025 08:30:37.178154 10463 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1025 08:30:37.196291 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:37.217903 10463 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1025 08:30:37.217926 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:37.217920 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:37.221209 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:37.358403 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 08:30:37.637520 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:37.687197 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:37.688584 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:37.688890 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:38.138159 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:38.183576 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:38.183711 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:38.186756 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:38.637032 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:38.687008 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:38.687127 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:38.687207 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:38.811716 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.453272214s)
W1025 08:30:38.811752 10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:30:38.811774 10463 retry.go:31] will retry after 375.905371ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:30:39.136332 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:39.183472 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:39.185882 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:39.186807 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:39.187882 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 08:30:39.637596 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:39.684994 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:39.685953 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:39.687608 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:40.137909 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:40.183957 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:40.184301 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:40.187536 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:40.339312 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.151397345s)
W1025 08:30:40.339363 10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:30:40.339428 10463 retry.go:31] will retry after 1.094514967s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:30:40.637012 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:40.683548 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:40.685179 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:40.685686 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:41.139864 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:41.239638 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:41.239740 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:41.239866 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:41.434186 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 08:30:41.634864 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:41.681133 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:41.681649 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:41.685408 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:42.133729 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
W1025 08:30:42.148753 10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:30:42.148790 10463 retry.go:31] will retry after 1.882995844s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:30:42.179013 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:42.179160 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:42.181755 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:42.636100 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:42.686740 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:42.689861 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:42.691770 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:43.136138 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:43.182385 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:43.185442 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:43.185643 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:43.634724 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:43.735107 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:43.735190 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:43.735540 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:44.032984 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 08:30:44.132658 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:44.185266 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:44.185459 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:44.187053 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:44.634608 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:44.682046 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:44.683625 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:44.683832 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
W1025 08:30:44.789499 10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:30:44.789541 10463 retry.go:31] will retry after 2.403366064s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:30:45.134322 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:45.180530 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:45.185229 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:45.185609 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:45.648496 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:45.682776 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:45.683293 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:45.683546 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:46.135279 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:46.182197 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:46.183390 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:46.183814 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:46.636345 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:46.685350 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:46.692876 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:46.693638 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:47.136969 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:47.189056 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:47.189272 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:47.190364 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:47.193549 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 08:30:47.637198 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:47.681509 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:47.682388 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:47.685052 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:48.136799 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:48.186935 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:48.187006 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:48.188132 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:48.445274 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.251681204s)
W1025 08:30:48.445325 10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:30:48.445345 10463 retry.go:31] will retry after 3.592234871s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:30:48.637197 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:48.686693 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:48.686717 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:48.687541 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:49.186095 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:49.186272 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:49.186391 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:49.187314 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:49.634321 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:49.681835 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:49.681848 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:49.683046 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:50.133612 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:50.179883 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:50.180546 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:50.181810 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:50.635269 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:50.736092 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:50.736148 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:50.736732 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:51.134263 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:51.180443 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:51.180599 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:51.181984 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:51.636034 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:51.683186 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:51.685881 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:51.686704 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:52.038297 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 08:30:52.136871 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:52.188752 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:52.189282 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:52.189376 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:52.638321 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:52.684397 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:52.684658 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:52.685762 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:53.137270 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:53.182823 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:53.182883 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:53.191182 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:53.240863 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.202521921s)
W1025 08:30:53.240912 10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:30:53.240936 10463 retry.go:31] will retry after 3.219637926s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:30:53.634079 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:53.683089 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:53.683119 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:53.687630 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:54.134421 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:54.182994 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:54.183019 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:54.183094 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:54.635899 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:54.679537 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:54.681892 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:54.684397 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:55.134787 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:55.180999 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:55.182415 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:55.182615 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:55.643462 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:55.687677 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:55.689286 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:55.692255 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:56.134140 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:56.377339 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:56.377795 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:56.378135 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:56.461394 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 08:30:56.634936 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:56.682968 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:56.687041 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:56.687742 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:57.135075 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:57.193305 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:57.198194 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:57.199028 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:57.637044 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:57.682101 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:57.682617 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:57.684136 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:57.713347 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.251894952s)
W1025 08:30:57.713397 10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:30:57.713421 10463 retry.go:31] will retry after 6.487569446s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:30:58.134368 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:58.183040 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:58.183308 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:58.183637 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:58.637117 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:58.693420 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:58.693470 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:58.693617 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:59.135677 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:59.237124 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:59.237439 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:30:59.237590 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:59.638457 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:30:59.697903 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:30:59.698150 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:30:59.698160 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:00.134020 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:00.183585 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:00.185505 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:00.186004 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:00.638413 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:00.680932 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:00.682169 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:00.683065 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:01.134584 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:01.181017 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:01.182978 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:01.183524 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:01.634281 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:01.683221 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:01.683285 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:01.687893 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:02.134593 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:02.184343 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:02.185136 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:02.185230 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:02.634510 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:02.736270 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:02.736374 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:02.736378 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:03.135920 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:03.180155 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:03.181790 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:03.183902 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:03.633888 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:03.679433 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:03.680276 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:03.681622 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:04.136944 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:04.179832 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:04.180070 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:04.182843 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:04.202085 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 08:31:04.634796 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:04.685464 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:04.685467 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:04.687400 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:05.133652 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:05.180892 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:05.181952 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:05.185074 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:05.253092 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.050967222s)
W1025 08:31:05.253138 10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:31:05.253158 10463 retry.go:31] will retry after 9.611661127s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:31:05.635850 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:05.682702 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:05.684623 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:05.684951 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:06.135808 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:06.181303 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:06.182800 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:06.191605 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:06.632905 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:06.695445 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:06.695733 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:06.695866 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:07.135158 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:07.187891 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:07.187917 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:07.188426 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:07.636927 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:07.679698 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:07.679703 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:07.680864 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:08.137966 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:08.179379 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:08.180468 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:08.181432 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:08.633519 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:08.681183 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:08.681405 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:08.682130 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:09.134806 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:09.180701 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:09.181033 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:09.183852 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:09.638834 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:09.681175 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:09.684743 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:09.686200 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:10.136650 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:10.183901 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:10.184134 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:10.188470 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:10.637219 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:10.683229 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:10.683258 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:10.683735 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:11.137582 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:11.190192 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:11.190460 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:11.192032 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:11.634859 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:11.679822 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:11.681714 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:11.683266 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:12.135497 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:12.181719 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:12.183887 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:12.184413 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:12.633656 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:12.679872 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:12.680183 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:12.682554 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:13.135401 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:13.236962 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:13.237058 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:13.237146 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:13.634100 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:13.679696 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:13.680994 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:13.682157 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:14.135139 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:14.178855 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:14.182979 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:14.183223 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:14.633665 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:14.679292 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:14.680645 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:14.681994 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:14.865427 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 08:31:15.136562 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:15.179817 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:15.185511 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:15.186155 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:15.634784 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:15.681450 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:15.685551 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:15.688108 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
W1025 08:31:15.734177 10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:31:15.734215 10463 retry.go:31] will retry after 9.851270621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:31:16.136452 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:16.184796 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:16.186441 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:16.186492 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:16.634126 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:16.683408 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:16.685349 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:16.686033 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:17.135130 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:17.183523 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:17.185429 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:17.185721 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:17.634923 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:17.680675 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:17.682904 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:17.683308 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:18.134584 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:18.181740 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:18.182904 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:18.184812 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:18.634136 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:18.679462 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:18.682234 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:18.682414 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:19.140660 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:19.180260 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:19.181589 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:19.182997 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:19.633603 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:19.735476 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:19.736807 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:19.737770 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:20.136917 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:20.182628 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:20.185476 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:20.186940 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:20.638988 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:20.679820 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:20.681378 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:20.681934 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:21.134229 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:21.179118 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 08:31:21.179887 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:21.181613 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:21.634922 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:21.682855 10463 kapi.go:107] duration metric: took 48.507262271s to wait for kubernetes.io/minikube-addons=registry ...
I1025 08:31:21.683090 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:21.686412 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:22.132844 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:22.180651 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:22.182217 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:22.633920 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:22.682842 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:22.684978 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:23.135684 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:23.182499 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:23.186255 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:23.633760 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:23.681056 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:23.682411 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:24.133534 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:24.184702 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:24.189084 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:24.638984 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:24.680976 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:24.686411 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:25.134228 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:25.179432 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:25.182625 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:25.586029 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 08:31:25.635868 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:25.683125 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:25.683222 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:26.134662 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:26.182502 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:26.187949 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:26.634871 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:26.682525 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:26.683433 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:26.995723 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.409659939s)
W1025 08:31:26.995762 10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:31:26.995779 10463 retry.go:31] will retry after 23.000661637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:31:27.134930 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:27.181899 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:27.183956 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:27.634742 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:27.680485 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:27.681996 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:28.135994 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:28.179604 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:28.181339 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:28.634358 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:28.681260 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:28.681994 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:29.140682 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:29.243846 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:29.244518 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:29.634492 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:29.685362 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:29.685770 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:30.134120 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:30.181998 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:30.192396 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:30.640176 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:30.680750 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:30.682558 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:31.136095 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:31.184588 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:31.184682 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:31.638578 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:31.688389 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:31.688981 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:32.134700 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:32.180059 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:32.182494 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:32.638852 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:32.740178 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:32.740575 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:33.134145 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:33.182128 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:33.183224 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:33.633689 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:33.688207 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:33.691127 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:34.135309 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:34.182276 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:34.186107 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:34.633296 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:34.680182 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:34.684122 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:35.139041 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:35.184261 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:35.186305 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:35.633203 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:35.733785 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:35.734010 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:36.133727 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:36.190841 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:36.190880 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:36.635277 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:36.680343 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:36.681791 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:37.134457 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:37.181683 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:37.184994 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:37.633368 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:37.682453 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:37.684359 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:38.137051 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:38.180784 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:38.187823 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:38.634232 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:38.684379 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:38.685231 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:39.134669 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:39.186644 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:39.187632 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:39.634016 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:39.679958 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:39.683785 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:40.136294 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:40.183011 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:40.186375 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:40.636720 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:40.684141 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:40.684295 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:41.139213 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 08:31:41.183053 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:41.191216 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:41.644092 10463 kapi.go:107] duration metric: took 1m7.014660887s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1025 08:31:41.680422 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:41.682918 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:42.181552 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:42.187551 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:42.680050 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:42.685527 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:43.185397 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:43.190024 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:43.682424 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:43.684493 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:44.184250 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:44.184303 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:44.683715 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:44.684803 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:45.303304 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:45.303560 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:45.682349 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:45.682898 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:46.181955 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:46.183118 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:46.683782 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:46.685045 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:47.189750 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:47.190531 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:47.680187 10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 08:31:47.682612 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:48.180159 10463 kapi.go:107] duration metric: took 1m15.004397568s to wait for app.kubernetes.io/name=ingress-nginx ...
I1025 08:31:48.183123 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:48.699126 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:49.181859 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:49.683492 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:49.996881 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 08:31:50.182514 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:50.683726 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:51.103169 10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.106243287s)
W1025 08:31:51.103207 10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:31:51.103247 10463 retry.go:31] will retry after 18.652136107s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 08:31:51.183569 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:51.682553 10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 08:31:52.184187 10463 kapi.go:107] duration metric: took 1m15.006032649s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1025 08:31:52.186088 10463 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-631036 cluster.
I1025 08:31:52.187578 10463 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1025 08:31:52.189012 10463 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I1025 08:32:09.756474 10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
W1025 08:32:10.482803 10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
W1025 08:32:10.482904 10463 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
]
I1025 08:32:10.485391 10463 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, registry-creds, ingress-dns, storage-provisioner, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I1025 08:32:10.487232 10463 addons.go:514] duration metric: took 1m46.769080493s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin registry-creds ingress-dns storage-provisioner cloud-spanner metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I1025 08:32:10.487301 10463 start.go:246] waiting for cluster config update ...
I1025 08:32:10.487323 10463 start.go:255] writing updated cluster config ...
I1025 08:32:10.487604 10463 ssh_runner.go:195] Run: rm -f paused
I1025 08:32:10.493963 10463 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1025 08:32:10.498263 10463 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wk56k" in "kube-system" namespace to be "Ready" or be gone ...
I1025 08:32:10.504470 10463 pod_ready.go:94] pod "coredns-66bc5c9577-wk56k" is "Ready"
I1025 08:32:10.504495 10463 pod_ready.go:86] duration metric: took 6.173199ms for pod "coredns-66bc5c9577-wk56k" in "kube-system" namespace to be "Ready" or be gone ...
I1025 08:32:10.507234 10463 pod_ready.go:83] waiting for pod "etcd-addons-631036" in "kube-system" namespace to be "Ready" or be gone ...
I1025 08:32:10.513050 10463 pod_ready.go:94] pod "etcd-addons-631036" is "Ready"
I1025 08:32:10.513086 10463 pod_ready.go:86] duration metric: took 5.808461ms for pod "etcd-addons-631036" in "kube-system" namespace to be "Ready" or be gone ...
I1025 08:32:10.515221 10463 pod_ready.go:83] waiting for pod "kube-apiserver-addons-631036" in "kube-system" namespace to be "Ready" or be gone ...
I1025 08:32:10.520928 10463 pod_ready.go:94] pod "kube-apiserver-addons-631036" is "Ready"
I1025 08:32:10.520964 10463 pod_ready.go:86] duration metric: took 5.702304ms for pod "kube-apiserver-addons-631036" in "kube-system" namespace to be "Ready" or be gone ...
I1025 08:32:10.525562 10463 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-631036" in "kube-system" namespace to be "Ready" or be gone ...
I1025 08:32:10.898801 10463 pod_ready.go:94] pod "kube-controller-manager-addons-631036" is "Ready"
I1025 08:32:10.898828 10463 pod_ready.go:86] duration metric: took 373.239167ms for pod "kube-controller-manager-addons-631036" in "kube-system" namespace to be "Ready" or be gone ...
I1025 08:32:11.099768 10463 pod_ready.go:83] waiting for pod "kube-proxy-nzdhm" in "kube-system" namespace to be "Ready" or be gone ...
I1025 08:32:11.498928 10463 pod_ready.go:94] pod "kube-proxy-nzdhm" is "Ready"
I1025 08:32:11.498953 10463 pod_ready.go:86] duration metric: took 399.159654ms for pod "kube-proxy-nzdhm" in "kube-system" namespace to be "Ready" or be gone ...
I1025 08:32:11.699075 10463 pod_ready.go:83] waiting for pod "kube-scheduler-addons-631036" in "kube-system" namespace to be "Ready" or be gone ...
I1025 08:32:12.097997 10463 pod_ready.go:94] pod "kube-scheduler-addons-631036" is "Ready"
I1025 08:32:12.098024 10463 pod_ready.go:86] duration metric: took 398.907605ms for pod "kube-scheduler-addons-631036" in "kube-system" namespace to be "Ready" or be gone ...
I1025 08:32:12.098054 10463 pod_ready.go:40] duration metric: took 1.604055392s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1025 08:32:12.142324 10463 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
I1025 08:32:12.144374 10463 out.go:179] * Done! kubectl is now configured to use "addons-631036" cluster and "default" namespace by default
==> CRI-O <==
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.857620131Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c485411-de62-4ec6-9b92-5ea9c668d8bd name=/runtime.v1.RuntimeService/ListContainers
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.858913189Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba649ef510c320f775909d4460e56a36b46f3dd31e59195bf71be7cff2f62a8b,PodSandboxId:c52e3bd825ede126abab9b4d6468adc65b010ad5689b8128806a26f2a6e31914,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761381180140542495,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2c09d0ba-4bcf-41ee-a6df-0ac2dfc801a8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f2511eac37cc062cd57134739c7258c0d93ee71f123a7973659c9bdcbb2efb,PodSandboxId:05b61984d2b069f30739e6da42a76fe9219b7c33cf6487494aacbca1e8e271b2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761381137241231973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f2f6f1d-47e1-4920-87aa-ea653b62155e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668d1cf35f06cbd179f51d51212cb1bea413f03a7041ae3de44d0fa8ee001d0e,PodSandboxId:a441975f9c54e4ab33d49dc47071c868f1c084d831eb92a69c10061efd52104f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761381107092340480,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-mfkds,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50e82f06-39c8-4f72-a0a5-6b4703790748,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:e3d9d99a48fa27dd71568ac5165f17fac6ae24ac5b9895d27349cc8811fbb173,PodSandboxId:ed7501f8db956f8ab57934a1b3de2400f821fc9352275b3ad2c886c61fccb16d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761381088955281135,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rmrl2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bb255d16-2e85-41e8-9ed9-a35ce6b6acc9,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66b7d625c06ccc43ff637a53c6aef306e50d0948f7c40895b1b8476ffcfa1535,PodSandboxId:edaf6cd0334deabc8ea039f6c3ad31e25477396185fca88c4969c154f9a15a3c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761381088853541935,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-29xlb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50c6c028-9478-4d0e-b417-21f918310c81,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c32668b06a8dda97480854551a3f509075d440bfcf9c2837afc7a9eeabe0c9,PodSandboxId:bb605a2e21acae12f5471fa897aa6f3b0927b5d2cda229731b0de557b2af9d3f,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761381073172437309,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-gg64c,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 274413d3-cf62-4e8a-a462-c34623a92df7,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d454023a5e021e53842e04e29085483d3903d0cd3904d318a4795e26495578ae,PodSandboxId:b879b682b9e49d6c4d61e3a94fe3a99f8c192e266af6518020571712be146631,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761381060198237535,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8340127-56b4-4638-b7ca-1a5815a313cc,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721e2f83faa260a0aa770486fb488a646a740f18306894e3c98ad4cfab67ff2a,PodSandboxId:c1d7a85a1a0663dccb16e2d1c8c4023f1d1d232f349461d
c32f7d5326db133db,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761381043605417893,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-frvrc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201f5833-8bf6-475d-82b1-c927a3c7317b,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3949e1e589d6de8f336d9514c13a17e20b8e77323cd06ebc8c00867db7c45eb4,PodSandboxId:1672536
cad4dd9773bcf09f09b181a78fbd046231903bdc49d51247632bf214d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761381035579365038,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ababc1-07e4-4d36-89b2-8a6c8d29de6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e76d1b0e30682069f4cd18ed6791c5929f8b1dac8174f741c4d364a982c223e,PodSandboxId:ac9360fccaa24d400f5
b9b3a72030c724d9a225b0b1e521fc0eff7041ca03d52,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761381024888422175,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wk56k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1147dfe5-42e8-493d-b71e-b18c2dccea1a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1290b3d741f76073327b7413370692cb5c197f53e359b26187abcb0025aeffbb,PodSandboxId:3309bf2556203c8c0193de8cdcb28ef8c8dfd5a80032b5a824cb6bf4e91fdde3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761381023562519757,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nzdhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3cd3e35-b924-472f-9218-233cdce69396,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4570839da44e8b3a2fa055fe684460608e271192fd6c44ac946666456795adcc,PodSandboxId:498a148b1fa089187f01fbd98131c676133adc17c66049a4afb37ef4b8e72b79,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761381012442655834,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-631036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16e8a53d09a7fbb3a5e534717096a964,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"ho
stPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9cf074fa9b3410489789b50b5dde599ca5b43a9ac50ecb4c836d80d3338c955,PodSandboxId:44ec30afdfe0e08068898f9d8da354d73493db2d38636ad705a4c7ea0ebe7f85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761381012465811659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-631036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 690e6a5b2418f0ca9b6f3c0b414ff231,},Annotations:map[string]str
ing{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:577f1fc6b5aa6028e9ec35cacc97e7a58a56550d7b5e0161bce35c86154ebe5d,PodSandboxId:dfa1994a628a0947a42108e7de8545ea85ae551aaf7fd5d96bdd8fac5fe92db9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761381012407959375,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons
-631036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe261af37f4a81df64519dd9c14a22d0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:012875c13f0610cc9ba2625b34d3309c83847bdd8a40fda7f3741223f8077fe4,PodSandboxId:18ce3d8a538afdbfe1a0e6410d96f13b142763fa6ff6f6dfb71d21d8f2f0d8fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761381012384741295,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-631036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732e721612489f67f85d112768938e2f,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c485411-de62-4ec6-9b92-5ea9c668d8bd name=/runtime.v1.RuntimeService/ListContainers
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.861723811Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 621636ff-a5a1-4705-859c-3adbd54cbb54,},},}" file="otel-collector/interceptors.go:62" id=9d220979-08c6-4db7-8a92-083611fd26f9 name=/runtime.v1.RuntimeService/ListPodSandbox
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.861922986Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6e39dcdd38eabd3fef38735b36482eff8731fe2c886ef1dc2fda1ce7b638ab3c,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-m9rs7,Uid:621636ff-a5a1-4705-859c-3adbd54cbb54,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761381322984937941,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-m9rs7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 621636ff-a5a1-4705-859c-3adbd54cbb54,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-25T08:35:22.660871119Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=9d220979-08c6-4db7-8a92-083611fd26f9 name=/runtime.v1.RuntimeService/ListPodSandbox
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.862722165Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:6e39dcdd38eabd3fef38735b36482eff8731fe2c886ef1dc2fda1ce7b638ab3c,Verbose:false,}" file="otel-collector/interceptors.go:62" id=319f1bfd-a7ae-4860-81e2-17811ecec85d name=/runtime.v1.RuntimeService/PodSandboxStatus
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.863440870Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:6e39dcdd38eabd3fef38735b36482eff8731fe2c886ef1dc2fda1ce7b638ab3c,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-m9rs7,Uid:621636ff-a5a1-4705-859c-3adbd54cbb54,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761381322984937941,Network:&PodSandboxNetworkStatus{Ip:10.244.0.33,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:&UserNamespace{Mode:NODE,Uids:[]*IDMapping{},Gids:[]*IDMapping{},},},},},Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-m9rs7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 621636ff-a5a1-4705-859c-3adbd54cbb54,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen:
2025-10-25T08:35:22.660871119Z,kubernetes.io/config.source: api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=319f1bfd-a7ae-4860-81e2-17811ecec85d name=/runtime.v1.RuntimeService/PodSandboxStatus
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.867338405Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 621636ff-a5a1-4705-859c-3adbd54cbb54,},},}" file="otel-collector/interceptors.go:62" id=ed16043e-e5c2-4252-a570-410c61708034 name=/runtime.v1.RuntimeService/ListContainers
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.867440902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed16043e-e5c2-4252-a570-410c61708034 name=/runtime.v1.RuntimeService/ListContainers
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.867491365Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ed16043e-e5c2-4252-a570-410c61708034 name=/runtime.v1.RuntimeService/ListContainers
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.886911774Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.888260947Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.901619388Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=324d4d13-655f-439c-b083-8c4455ad7ba1 name=/runtime.v1.RuntimeService/Version
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.901792066Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=324d4d13-655f-439c-b083-8c4455ad7ba1 name=/runtime.v1.RuntimeService/Version
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.903296472Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e0ab21ea-5bd2-479b-8cbd-c86562a4f9f3 name=/runtime.v1.ImageService/ImageFsInfo
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.904586925Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761381323904559787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588896,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e0ab21ea-5bd2-479b-8cbd-c86562a4f9f3 name=/runtime.v1.ImageService/ImageFsInfo
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.905418915Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=daa0bc1e-e644-44f1-9b06-3a7c9137a586 name=/runtime.v1.RuntimeService/ListContainers
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.905715317Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=daa0bc1e-e644-44f1-9b06-3a7c9137a586 name=/runtime.v1.RuntimeService/ListContainers
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.906340257Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba649ef510c320f775909d4460e56a36b46f3dd31e59195bf71be7cff2f62a8b,PodSandboxId:c52e3bd825ede126abab9b4d6468adc65b010ad5689b8128806a26f2a6e31914,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761381180140542495,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2c09d0ba-4bcf-41ee-a6df-0ac2dfc801a8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f2511eac37cc062cd57134739c7258c0d93ee71f123a7973659c9bdcbb2efb,PodSandboxId:05b61984d2b069f30739e6da42a76fe9219b7c33cf6487494aacbca1e8e271b2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761381137241231973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f2f6f1d-47e1-4920-87aa-ea653b62155e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668d1cf35f06cbd179f51d51212cb1bea413f03a7041ae3de44d0fa8ee001d0e,PodSandboxId:a441975f9c54e4ab33d49dc47071c868f1c084d831eb92a69c10061efd52104f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761381107092340480,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-mfkds,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50e82f06-39c8-4f72-a0a5-6b4703790748,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:e3d9d99a48fa27dd71568ac5165f17fac6ae24ac5b9895d27349cc8811fbb173,PodSandboxId:ed7501f8db956f8ab57934a1b3de2400f821fc9352275b3ad2c886c61fccb16d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761381088955281135,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rmrl2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bb255d16-2e85-41e8-9ed9-a35ce6b6acc9,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66b7d625c06ccc43ff637a53c6aef306e50d0948f7c40895b1b8476ffcfa1535,PodSandboxId:edaf6cd0334deabc8ea039f6c3ad31e25477396185fca88c4969c154f9a15a3c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761381088853541935,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-29xlb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50c6c028-9478-4d0e-b417-21f918310c81,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c32668b06a8dda97480854551a3f509075d440bfcf9c2837afc7a9eeabe0c9,PodSandboxId:bb605a2e21acae12f5471fa897aa6f3b0927b5d2cda229731b0de557b2af9d3f,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761381073172437309,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-gg64c,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 274413d3-cf62-4e8a-a462-c34623a92df7,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d454023a5e021e53842e04e29085483d3903d0cd3904d318a4795e26495578ae,PodSandboxId:b879b682b9e49d6c4d61e3a94fe3a99f8c192e266af6518020571712be146631,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761381060198237535,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8340127-56b4-4638-b7ca-1a5815a313cc,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721e2f83faa260a0aa770486fb488a646a740f18306894e3c98ad4cfab67ff2a,PodSandboxId:c1d7a85a1a0663dccb16e2d1c8c4023f1d1d232f349461d
c32f7d5326db133db,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761381043605417893,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-frvrc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201f5833-8bf6-475d-82b1-c927a3c7317b,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3949e1e589d6de8f336d9514c13a17e20b8e77323cd06ebc8c00867db7c45eb4,PodSandboxId:1672536
cad4dd9773bcf09f09b181a78fbd046231903bdc49d51247632bf214d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761381035579365038,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ababc1-07e4-4d36-89b2-8a6c8d29de6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e76d1b0e30682069f4cd18ed6791c5929f8b1dac8174f741c4d364a982c223e,PodSandboxId:ac9360fccaa24d400f5
b9b3a72030c724d9a225b0b1e521fc0eff7041ca03d52,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761381024888422175,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wk56k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1147dfe5-42e8-493d-b71e-b18c2dccea1a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1290b3d741f76073327b7413370692cb5c197f53e359b26187abcb0025aeffbb,PodSandboxId:3309bf2556203c8c0193de8cdcb28ef8c8dfd5a80032b5a824cb6bf4e91fdde3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761381023562519757,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nzdhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3cd3e35-b924-472f-9218-233cdce69396,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4570839da44e8b3a2fa055fe684460608e271192fd6c44ac946666456795adcc,PodSandboxId:498a148b1fa089187f01fbd98131c676133adc17c66049a4afb37ef4b8e72b79,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761381012442655834,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-631036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16e8a53d09a7fbb3a5e534717096a964,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"ho
stPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9cf074fa9b3410489789b50b5dde599ca5b43a9ac50ecb4c836d80d3338c955,PodSandboxId:44ec30afdfe0e08068898f9d8da354d73493db2d38636ad705a4c7ea0ebe7f85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761381012465811659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-631036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 690e6a5b2418f0ca9b6f3c0b414ff231,},Annotations:map[string]str
ing{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:577f1fc6b5aa6028e9ec35cacc97e7a58a56550d7b5e0161bce35c86154ebe5d,PodSandboxId:dfa1994a628a0947a42108e7de8545ea85ae551aaf7fd5d96bdd8fac5fe92db9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761381012407959375,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons
-631036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe261af37f4a81df64519dd9c14a22d0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:012875c13f0610cc9ba2625b34d3309c83847bdd8a40fda7f3741223f8077fe4,PodSandboxId:18ce3d8a538afdbfe1a0e6410d96f13b142763fa6ff6f6dfb71d21d8f2f0d8fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761381012384741295,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-631036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732e721612489f67f85d112768938e2f,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=daa0bc1e-e644-44f1-9b06-3a7c9137a586 name=/runtime.v1.RuntimeService/ListContainers
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.946719615Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=77cbe99c-68e3-4cc9-811c-8bb1fb285650 name=/runtime.v1.RuntimeService/Version
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.946811601Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=77cbe99c-68e3-4cc9-811c-8bb1fb285650 name=/runtime.v1.RuntimeService/Version
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.948441725Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9ec5343-7664-4cd4-98d6-9df2fb3195da name=/runtime.v1.ImageService/ImageFsInfo
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.950177285Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761381323950147190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588896,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9ec5343-7664-4cd4-98d6-9df2fb3195da name=/runtime.v1.ImageService/ImageFsInfo
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.950960713Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3468038-10e1-4cd2-a28c-7b93cc6e6403 name=/runtime.v1.RuntimeService/ListContainers
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.951286998Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3468038-10e1-4cd2-a28c-7b93cc6e6403 name=/runtime.v1.RuntimeService/ListContainers
Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.951872900Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba649ef510c320f775909d4460e56a36b46f3dd31e59195bf71be7cff2f62a8b,PodSandboxId:c52e3bd825ede126abab9b4d6468adc65b010ad5689b8128806a26f2a6e31914,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761381180140542495,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2c09d0ba-4bcf-41ee-a6df-0ac2dfc801a8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f2511eac37cc062cd57134739c7258c0d93ee71f123a7973659c9bdcbb2efb,PodSandboxId:05b61984d2b069f30739e6da42a76fe9219b7c33cf6487494aacbca1e8e271b2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761381137241231973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f2f6f1d-47e1-4920-87aa-ea653b62155e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668d1cf35f06cbd179f51d51212cb1bea413f03a7041ae3de44d0fa8ee001d0e,PodSandboxId:a441975f9c54e4ab33d49dc47071c868f1c084d831eb92a69c10061efd52104f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761381107092340480,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-mfkds,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50e82f06-39c8-4f72-a0a5-6b4703790748,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:e3d9d99a48fa27dd71568ac5165f17fac6ae24ac5b9895d27349cc8811fbb173,PodSandboxId:ed7501f8db956f8ab57934a1b3de2400f821fc9352275b3ad2c886c61fccb16d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761381088955281135,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rmrl2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bb255d16-2e85-41e8-9ed9-a35ce6b6acc9,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66b7d625c06ccc43ff637a53c6aef306e50d0948f7c40895b1b8476ffcfa1535,PodSandboxId:edaf6cd0334deabc8ea039f6c3ad31e25477396185fca88c4969c154f9a15a3c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761381088853541935,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-29xlb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50c6c028-9478-4d0e-b417-21f918310c81,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c32668b06a8dda97480854551a3f509075d440bfcf9c2837afc7a9eeabe0c9,PodSandboxId:bb605a2e21acae12f5471fa897aa6f3b0927b5d2cda229731b0de557b2af9d3f,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761381073172437309,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-gg64c,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 274413d3-cf62-4e8a-a462-c34623a92df7,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d454023a5e021e53842e04e29085483d3903d0cd3904d318a4795e26495578ae,PodSandboxId:b879b682b9e49d6c4d61e3a94fe3a99f8c192e266af6518020571712be146631,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761381060198237535,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8340127-56b4-4638-b7ca-1a5815a313cc,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721e2f83faa260a0aa770486fb488a646a740f18306894e3c98ad4cfab67ff2a,PodSandboxId:c1d7a85a1a0663dccb16e2d1c8c4023f1d1d232f349461d
c32f7d5326db133db,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761381043605417893,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-frvrc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201f5833-8bf6-475d-82b1-c927a3c7317b,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3949e1e589d6de8f336d9514c13a17e20b8e77323cd06ebc8c00867db7c45eb4,PodSandboxId:1672536
cad4dd9773bcf09f09b181a78fbd046231903bdc49d51247632bf214d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761381035579365038,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ababc1-07e4-4d36-89b2-8a6c8d29de6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e76d1b0e30682069f4cd18ed6791c5929f8b1dac8174f741c4d364a982c223e,PodSandboxId:ac9360fccaa24d400f5
b9b3a72030c724d9a225b0b1e521fc0eff7041ca03d52,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761381024888422175,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wk56k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1147dfe5-42e8-493d-b71e-b18c2dccea1a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1290b3d741f76073327b7413370692cb5c197f53e359b26187abcb0025aeffbb,PodSandboxId:3309bf2556203c8c0193de8cdcb28ef8c8dfd5a80032b5a824cb6bf4e91fdde3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761381023562519757,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nzdhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3cd3e35-b924-472f-9218-233cdce69396,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4570839da44e8b3a2fa055fe684460608e271192fd6c44ac946666456795adcc,PodSandboxId:498a148b1fa089187f01fbd98131c676133adc17c66049a4afb37ef4b8e72b79,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761381012442655834,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-631036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16e8a53d09a7fbb3a5e534717096a964,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"ho
stPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9cf074fa9b3410489789b50b5dde599ca5b43a9ac50ecb4c836d80d3338c955,PodSandboxId:44ec30afdfe0e08068898f9d8da354d73493db2d38636ad705a4c7ea0ebe7f85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761381012465811659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-631036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 690e6a5b2418f0ca9b6f3c0b414ff231,},Annotations:map[string]str
ing{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:577f1fc6b5aa6028e9ec35cacc97e7a58a56550d7b5e0161bce35c86154ebe5d,PodSandboxId:dfa1994a628a0947a42108e7de8545ea85ae551aaf7fd5d96bdd8fac5fe92db9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761381012407959375,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons
-631036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe261af37f4a81df64519dd9c14a22d0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:012875c13f0610cc9ba2625b34d3309c83847bdd8a40fda7f3741223f8077fe4,PodSandboxId:18ce3d8a538afdbfe1a0e6410d96f13b142763fa6ff6f6dfb71d21d8f2f0d8fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761381012384741295,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-631036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732e721612489f67f85d112768938e2f,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3468038-10e1-4cd2-a28c-7b93cc6e6403 name=/runtime.v1.RuntimeService/ListContainers
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
ba649ef510c32 docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22 2 minutes ago Running nginx 0 c52e3bd825ede nginx
51f2511eac37c gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 3 minutes ago Running busybox 0 05b61984d2b06 busybox
668d1cf35f06c registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd 3 minutes ago Running controller 0 a441975f9c54e ingress-nginx-controller-675c5ddd98-mfkds
e3d9d99a48fa2 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39 3 minutes ago Exited patch 0 ed7501f8db956 ingress-nginx-admission-patch-rmrl2
66b7d625c06cc registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39 3 minutes ago Exited create 0 edaf6cd0334de ingress-nginx-admission-create-29xlb
c7c32668b06a8 ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb 4 minutes ago Running gadget 0 bb605a2e21aca gadget-gg64c
d454023a5e021 docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 4 minutes ago Running minikube-ingress-dns 0 b879b682b9e49 kube-ingress-dns-minikube
721e2f83faa26 docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 4 minutes ago Running amd-gpu-device-plugin 0 c1d7a85a1a066 amd-gpu-device-plugin-frvrc
3949e1e589d6d 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 1672536cad4dd storage-provisioner
0e76d1b0e3068 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969 4 minutes ago Running coredns 0 ac9360fccaa24 coredns-66bc5c9577-wk56k
1290b3d741f76 fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7 5 minutes ago Running kube-proxy 0 3309bf2556203 kube-proxy-nzdhm
a9cf074fa9b34 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813 5 minutes ago Running kube-scheduler 0 44ec30afdfe0e kube-scheduler-addons-631036
4570839da44e8 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115 5 minutes ago Running etcd 0 498a148b1fa08 etcd-addons-631036
577f1fc6b5aa6 c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f 5 minutes ago Running kube-controller-manager 0 dfa1994a628a0 kube-controller-manager-addons-631036
012875c13f061 c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97 5 minutes ago Running kube-apiserver 0 18ce3d8a538af kube-apiserver-addons-631036
==> coredns [0e76d1b0e30682069f4cd18ed6791c5929f8b1dac8174f741c4d364a982c223e] <==
[INFO] 10.244.0.8:53549 - 54721 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000080295s
[INFO] 10.244.0.8:53549 - 17715 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000225624s
[INFO] 10.244.0.8:53549 - 29869 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000066707s
[INFO] 10.244.0.8:53549 - 38204 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000212605s
[INFO] 10.244.0.8:53549 - 47698 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000072373s
[INFO] 10.244.0.8:53549 - 8203 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000107784s
[INFO] 10.244.0.8:53549 - 14630 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00024967s
[INFO] 10.244.0.8:38244 - 15356 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000121357s
[INFO] 10.244.0.8:38244 - 15636 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000088229s
[INFO] 10.244.0.8:45168 - 53909 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00009541s
[INFO] 10.244.0.8:45168 - 54212 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00008685s
[INFO] 10.244.0.8:47589 - 2645 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059858s
[INFO] 10.244.0.8:47589 - 2895 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000062418s
[INFO] 10.244.0.8:57234 - 17719 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000249377s
[INFO] 10.244.0.8:57234 - 18140 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00013674s
[INFO] 10.244.0.23:49170 - 64485 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000836695s
[INFO] 10.244.0.23:43288 - 8095 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000208942s
[INFO] 10.244.0.23:47920 - 10132 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000113568s
[INFO] 10.244.0.23:56184 - 54988 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000087744s
[INFO] 10.244.0.23:44694 - 44820 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000206826s
[INFO] 10.244.0.23:57625 - 62687 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125151s
[INFO] 10.244.0.23:58662 - 25286 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001457941s
[INFO] 10.244.0.23:53651 - 42374 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001146083s
[INFO] 10.244.0.27:45721 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000543779s
[INFO] 10.244.0.27:59460 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000204923s
==> describe nodes <==
Name: addons-631036
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-631036
kubernetes.io/os=linux
minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
minikube.k8s.io/name=addons-631036
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_10_25T08_30_18_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-631036
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 25 Oct 2025 08:30:15 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-631036
AcquireTime: <unset>
RenewTime: Sat, 25 Oct 2025 08:35:14 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 25 Oct 2025 08:33:22 +0000 Sat, 25 Oct 2025 08:30:12 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 25 Oct 2025 08:33:22 +0000 Sat, 25 Oct 2025 08:30:12 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 25 Oct 2025 08:33:22 +0000 Sat, 25 Oct 2025 08:30:12 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 25 Oct 2025 08:33:22 +0000 Sat, 25 Oct 2025 08:30:19 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.24
Hostname: addons-631036
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4008588Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4008588Ki
pods: 110
System Info:
Machine ID: 47cdcab0e8ea48b5a70c5c459d82a833
System UUID: 47cdcab0-e8ea-48b5-a70c-5c459d82a833
Boot ID: ddcee597-ce31-4c7f-9e40-372d0f38163a
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.34.1
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (14 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m12s
default hello-world-app-5d498dc89-m9rs7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m29s
gadget gadget-gg64c 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m53s
ingress-nginx ingress-nginx-controller-675c5ddd98-mfkds 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m52s
kube-system amd-gpu-device-plugin-frvrc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m58s
kube-system coredns-66bc5c9577-wk56k 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 5m1s
kube-system etcd-addons-631036 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 5m6s
kube-system kube-apiserver-addons-631036 250m (12%) 0 (0%) 0 (0%) 0 (0%) 5m7s
kube-system kube-controller-manager-addons-631036 200m (10%) 0 (0%) 0 (0%) 0 (0%) 5m6s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m55s
kube-system kube-proxy-nzdhm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m1s
kube-system kube-scheduler-addons-631036 100m (5%) 0 (0%) 0 (0%) 0 (0%) 5m8s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m53s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m59s kube-proxy
Normal NodeHasSufficientMemory 5m13s (x8 over 5m13s) kubelet Node addons-631036 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m13s (x8 over 5m13s) kubelet Node addons-631036 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m13s (x7 over 5m13s) kubelet Node addons-631036 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 5m13s kubelet Updated Node Allocatable limit across pods
Normal Starting 5m6s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 5m6s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 5m6s kubelet Node addons-631036 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m6s kubelet Node addons-631036 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m6s kubelet Node addons-631036 status is now: NodeHasSufficientPID
Normal NodeReady 5m5s kubelet Node addons-631036 status is now: NodeReady
Normal RegisteredNode 5m2s node-controller Node addons-631036 event: Registered Node addons-631036 in Controller
==> dmesg <==
[ +0.000031] kauditd_printk_skb: 369 callbacks suppressed
[ +10.445317] kauditd_printk_skb: 142 callbacks suppressed
[Oct25 08:31] kauditd_printk_skb: 11 callbacks suppressed
[ +7.734058] kauditd_printk_skb: 32 callbacks suppressed
[ +5.242656] kauditd_printk_skb: 11 callbacks suppressed
[ +5.908158] kauditd_printk_skb: 20 callbacks suppressed
[ +4.262615] kauditd_printk_skb: 32 callbacks suppressed
[ +5.104054] kauditd_printk_skb: 65 callbacks suppressed
[ +0.711444] kauditd_printk_skb: 141 callbacks suppressed
[ +0.000286] kauditd_printk_skb: 93 callbacks suppressed
[ +5.520943] kauditd_printk_skb: 26 callbacks suppressed
[ +11.856470] kauditd_printk_skb: 38 callbacks suppressed
[Oct25 08:32] kauditd_printk_skb: 2 callbacks suppressed
[ +14.007694] kauditd_printk_skb: 41 callbacks suppressed
[ +6.044518] kauditd_printk_skb: 22 callbacks suppressed
[ +5.492281] kauditd_printk_skb: 44 callbacks suppressed
[ +2.460316] kauditd_printk_skb: 150 callbacks suppressed
[ +0.412509] kauditd_printk_skb: 152 callbacks suppressed
[ +0.180278] kauditd_printk_skb: 156 callbacks suppressed
[Oct25 08:33] kauditd_printk_skb: 61 callbacks suppressed
[ +5.949955] kauditd_printk_skb: 26 callbacks suppressed
[ +6.431379] kauditd_printk_skb: 5 callbacks suppressed
[ +0.000071] kauditd_printk_skb: 30 callbacks suppressed
[ +7.562907] kauditd_printk_skb: 41 callbacks suppressed
[Oct25 08:35] kauditd_printk_skb: 127 callbacks suppressed
==> etcd [4570839da44e8b3a2fa055fe684460608e271192fd6c44ac946666456795adcc] <==
{"level":"info","ts":"2025-10-25T08:30:58.602733Z","caller":"traceutil/trace.go:172","msg":"trace[604932350] transaction","detail":"{read_only:false; response_revision:955; number_of_response:1; }","duration":"203.775922ms","start":"2025-10-25T08:30:58.398942Z","end":"2025-10-25T08:30:58.602718Z","steps":["trace[604932350] 'process raft request' (duration: 203.686371ms)"],"step_count":1}
{"level":"info","ts":"2025-10-25T08:31:05.831027Z","caller":"traceutil/trace.go:172","msg":"trace[1622288266] transaction","detail":"{read_only:false; response_revision:979; number_of_response:1; }","duration":"106.39306ms","start":"2025-10-25T08:31:05.724622Z","end":"2025-10-25T08:31:05.831015Z","steps":["trace[1622288266] 'process raft request' (duration: 106.303926ms)"],"step_count":1}
{"level":"info","ts":"2025-10-25T08:31:07.414848Z","caller":"traceutil/trace.go:172","msg":"trace[2139928386] transaction","detail":"{read_only:false; response_revision:982; number_of_response:1; }","duration":"151.355643ms","start":"2025-10-25T08:31:07.263479Z","end":"2025-10-25T08:31:07.414835Z","steps":["trace[2139928386] 'process raft request' (duration: 149.538864ms)"],"step_count":1}
{"level":"info","ts":"2025-10-25T08:31:17.528057Z","caller":"traceutil/trace.go:172","msg":"trace[103179203] transaction","detail":"{read_only:false; response_revision:1005; number_of_response:1; }","duration":"137.753023ms","start":"2025-10-25T08:31:17.390282Z","end":"2025-10-25T08:31:17.528035Z","steps":["trace[103179203] 'process raft request' (duration: 137.637654ms)"],"step_count":1}
{"level":"info","ts":"2025-10-25T08:31:24.617904Z","caller":"traceutil/trace.go:172","msg":"trace[1806828665] transaction","detail":"{read_only:false; response_revision:1037; number_of_response:1; }","duration":"327.561656ms","start":"2025-10-25T08:31:24.290329Z","end":"2025-10-25T08:31:24.617890Z","steps":["trace[1806828665] 'process raft request' (duration: 327.430368ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-25T08:31:24.618695Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-25T08:31:24.290303Z","time spent":"327.675054ms","remote":"127.0.0.1:35560","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3995,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/local-path-storage/local-path-provisioner-648f6765c9-rdtcc\" mod_revision:636 > success:<request_put:<key:\"/registry/pods/local-path-storage/local-path-provisioner-648f6765c9-rdtcc\" value_size:3914 >> failure:<request_range:<key:\"/registry/pods/local-path-storage/local-path-provisioner-648f6765c9-rdtcc\" > >"}
{"level":"info","ts":"2025-10-25T08:31:30.331568Z","caller":"traceutil/trace.go:172","msg":"trace[168738082] transaction","detail":"{read_only:false; response_revision:1075; number_of_response:1; }","duration":"139.190952ms","start":"2025-10-25T08:31:30.192364Z","end":"2025-10-25T08:31:30.331555Z","steps":["trace[168738082] 'process raft request' (duration: 139.077776ms)"],"step_count":1}
{"level":"info","ts":"2025-10-25T08:31:45.295270Z","caller":"traceutil/trace.go:172","msg":"trace[1724880962] linearizableReadLoop","detail":"{readStateIndex:1199; appliedIndex:1199; }","duration":"252.970603ms","start":"2025-10-25T08:31:45.042257Z","end":"2025-10-25T08:31:45.295227Z","steps":["trace[1724880962] 'read index received' (duration: 252.954898ms)","trace[1724880962] 'applied index is now lower than readState.Index' (duration: 14.368µs)"],"step_count":2}
{"level":"info","ts":"2025-10-25T08:31:45.295354Z","caller":"traceutil/trace.go:172","msg":"trace[1337160178] transaction","detail":"{read_only:false; response_revision:1162; number_of_response:1; }","duration":"263.746854ms","start":"2025-10-25T08:31:45.031596Z","end":"2025-10-25T08:31:45.295342Z","steps":["trace[1337160178] 'process raft request' (duration: 263.648163ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-25T08:31:45.295460Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"253.167718ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-10-25T08:31:45.295486Z","caller":"traceutil/trace.go:172","msg":"trace[1478105519] range","detail":"{range_begin:/registry/daemonsets; range_end:; response_count:0; response_revision:1162; }","duration":"253.224441ms","start":"2025-10-25T08:31:45.042252Z","end":"2025-10-25T08:31:45.295477Z","steps":["trace[1478105519] 'agreement among raft nodes before linearized reading' (duration: 253.134425ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-25T08:31:45.295656Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.966042ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-10-25T08:31:45.295675Z","caller":"traceutil/trace.go:172","msg":"trace[1318672182] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1162; }","duration":"121.987244ms","start":"2025-10-25T08:31:45.173683Z","end":"2025-10-25T08:31:45.295670Z","steps":["trace[1318672182] 'agreement among raft nodes before linearized reading' (duration: 121.952488ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-25T08:31:45.295766Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.989807ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-10-25T08:31:45.295799Z","caller":"traceutil/trace.go:172","msg":"trace[469332901] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1162; }","duration":"120.022851ms","start":"2025-10-25T08:31:45.175771Z","end":"2025-10-25T08:31:45.295794Z","steps":["trace[469332901] 'agreement among raft nodes before linearized reading' (duration: 119.979282ms)"],"step_count":1}
{"level":"info","ts":"2025-10-25T08:32:41.045385Z","caller":"traceutil/trace.go:172","msg":"trace[1100788515] transaction","detail":"{read_only:false; response_revision:1417; number_of_response:1; }","duration":"148.074185ms","start":"2025-10-25T08:32:40.897277Z","end":"2025-10-25T08:32:41.045351Z","steps":["trace[1100788515] 'process raft request' (duration: 147.17463ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-25T08:32:46.687918Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.338137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-10-25T08:32:46.688015Z","caller":"traceutil/trace.go:172","msg":"trace[31958330] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1472; }","duration":"161.447285ms","start":"2025-10-25T08:32:46.526553Z","end":"2025-10-25T08:32:46.688000Z","steps":["trace[31958330] 'range keys from in-memory index tree' (duration: 161.245951ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-25T08:32:46.688384Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.06948ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/cloud-spanner-emulator-86bd5cbb97-zg648\" limit:1 ","response":"range_response_count:1 size:3721"}
{"level":"info","ts":"2025-10-25T08:32:46.688412Z","caller":"traceutil/trace.go:172","msg":"trace[1703085354] range","detail":"{range_begin:/registry/pods/default/cloud-spanner-emulator-86bd5cbb97-zg648; range_end:; response_count:1; response_revision:1472; }","duration":"103.104819ms","start":"2025-10-25T08:32:46.585299Z","end":"2025-10-25T08:32:46.688404Z","steps":["trace[1703085354] 'range keys from in-memory index tree' (duration: 102.992464ms)"],"step_count":1}
{"level":"info","ts":"2025-10-25T08:33:16.030866Z","caller":"traceutil/trace.go:172","msg":"trace[1186067016] transaction","detail":"{read_only:false; response_revision:1671; number_of_response:1; }","duration":"283.477657ms","start":"2025-10-25T08:33:15.747364Z","end":"2025-10-25T08:33:16.030842Z","steps":["trace[1186067016] 'process raft request' (duration: 283.30274ms)"],"step_count":1}
{"level":"info","ts":"2025-10-25T08:33:52.416870Z","caller":"traceutil/trace.go:172","msg":"trace[2063075386] linearizableReadLoop","detail":"{readStateIndex:1982; appliedIndex:1982; }","duration":"141.779944ms","start":"2025-10-25T08:33:52.274963Z","end":"2025-10-25T08:33:52.416743Z","steps":["trace[2063075386] 'read index received' (duration: 141.770696ms)","trace[2063075386] 'applied index is now lower than readState.Index' (duration: 8.097µs)"],"step_count":2}
{"level":"info","ts":"2025-10-25T08:33:52.416905Z","caller":"traceutil/trace.go:172","msg":"trace[406845407] transaction","detail":"{read_only:false; response_revision:1908; number_of_response:1; }","duration":"144.640836ms","start":"2025-10-25T08:33:52.272253Z","end":"2025-10-25T08:33:52.416894Z","steps":["trace[406845407] 'process raft request' (duration: 144.533598ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-25T08:33:52.417174Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"142.087028ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/csi-resizer-role-cfg\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-10-25T08:33:52.417204Z","caller":"traceutil/trace.go:172","msg":"trace[641442943] range","detail":"{range_begin:/registry/rolebindings/kube-system/csi-resizer-role-cfg; range_end:; response_count:0; response_revision:1908; }","duration":"142.235113ms","start":"2025-10-25T08:33:52.274959Z","end":"2025-10-25T08:33:52.417194Z","steps":["trace[641442943] 'agreement among raft nodes before linearized reading' (duration: 142.063737ms)"],"step_count":1}
==> kernel <==
08:35:24 up 5 min, 0 users, load average: 0.34, 1.11, 0.61
Linux addons-631036 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [012875c13f0610cc9ba2625b34d3309c83847bdd8a40fda7f3741223f8077fe4] <==
> logger="UnhandledError"
E1025 08:31:09.085758 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
E1025 08:32:23.959914 1 conn.go:339] Error on socket receive: read tcp 192.168.39.24:8443->192.168.39.1:46978: use of closed network connection
E1025 08:32:24.160264 1 conn.go:339] Error on socket receive: read tcp 192.168.39.24:8443->192.168.39.1:47022: use of closed network connection
I1025 08:32:33.407518 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.59.0"}
I1025 08:32:55.830034 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1025 08:32:56.043023 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.166.38"}
E1025 08:33:09.194645 1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
I1025 08:33:10.051255 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1025 08:33:23.312242 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
E1025 08:33:25.222145 1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
I1025 08:33:47.211129 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1025 08:33:47.211415 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1025 08:33:47.259184 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1025 08:33:47.259250 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1025 08:33:47.418802 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1025 08:33:47.418860 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1025 08:33:47.423691 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1025 08:33:47.423724 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1025 08:33:47.463556 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1025 08:33:47.465203 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1025 08:33:48.429877 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1025 08:33:48.464160 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
W1025 08:33:48.574269 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
I1025 08:35:22.755312 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.53.92"}
==> kube-controller-manager [577f1fc6b5aa6028e9ec35cacc97e7a58a56550d7b5e0161bce35c86154ebe5d] <==
I1025 08:33:52.477243 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
E1025 08:33:55.428851 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1025 08:33:55.430011 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1025 08:33:55.482545 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1025 08:33:55.483574 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1025 08:33:57.386643 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1025 08:33:57.387751 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1025 08:34:04.711765 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1025 08:34:04.712823 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1025 08:34:06.123920 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1025 08:34:06.125208 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1025 08:34:07.389899 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1025 08:34:07.391243 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1025 08:34:19.197184 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1025 08:34:19.198266 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1025 08:34:28.624743 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1025 08:34:28.626253 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1025 08:34:29.447481 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1025 08:34:29.448619 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1025 08:35:07.478985 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1025 08:35:07.480044 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1025 08:35:09.331594 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1025 08:35:09.332682 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1025 08:35:17.457463 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1025 08:35:17.458954 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
==> kube-proxy [1290b3d741f76073327b7413370692cb5c197f53e359b26187abcb0025aeffbb] <==
I1025 08:30:24.039769 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1025 08:30:24.140798 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1025 08:30:24.140908 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.24"]
E1025 08:30:24.141245 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1025 08:30:24.247490 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1025 08:30:24.247567 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1025 08:30:24.247680 1 server_linux.go:132] "Using iptables Proxier"
I1025 08:30:24.267477 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1025 08:30:24.267808 1 server.go:527] "Version info" version="v1.34.1"
I1025 08:30:24.267822 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1025 08:30:24.276569 1 config.go:200] "Starting service config controller"
I1025 08:30:24.276583 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1025 08:30:24.276613 1 config.go:106] "Starting endpoint slice config controller"
I1025 08:30:24.276616 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1025 08:30:24.276632 1 config.go:403] "Starting serviceCIDR config controller"
I1025 08:30:24.276635 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1025 08:30:24.284955 1 config.go:309] "Starting node config controller"
I1025 08:30:24.288174 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1025 08:30:24.288189 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1025 08:30:24.378750 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1025 08:30:24.378826 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1025 08:30:24.378852 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-scheduler [a9cf074fa9b3410489789b50b5dde599ca5b43a9ac50ecb4c836d80d3338c955] <==
E1025 08:30:15.382268 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1025 08:30:15.384691 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1025 08:30:15.386494 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1025 08:30:15.386601 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1025 08:30:15.386659 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1025 08:30:15.386737 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1025 08:30:15.386789 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1025 08:30:15.386841 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1025 08:30:15.386868 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1025 08:30:15.386981 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1025 08:30:15.387006 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1025 08:30:15.387039 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1025 08:30:15.387155 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1025 08:30:16.221586 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1025 08:30:16.336207 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1025 08:30:16.352288 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1025 08:30:16.371236 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1025 08:30:16.416338 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1025 08:30:16.489780 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1025 08:30:16.657868 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1025 08:30:16.684884 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1025 08:30:16.712362 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1025 08:30:16.728591 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1025 08:30:16.745856 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
I1025 08:30:19.461614 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Oct 25 08:33:50 addons-631036 kubelet[1497]: I1025 08:33:50.369163 1497 scope.go:117] "RemoveContainer" containerID="37d0252b798b04a246b16645698f61b5ae1d84a5e82c86e91b2aec2897aa617f"
Oct 25 08:33:50 addons-631036 kubelet[1497]: I1025 08:33:50.369764 1497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37d0252b798b04a246b16645698f61b5ae1d84a5e82c86e91b2aec2897aa617f"} err="failed to get container status \"37d0252b798b04a246b16645698f61b5ae1d84a5e82c86e91b2aec2897aa617f\": rpc error: code = NotFound desc = could not find container \"37d0252b798b04a246b16645698f61b5ae1d84a5e82c86e91b2aec2897aa617f\": container with ID starting with 37d0252b798b04a246b16645698f61b5ae1d84a5e82c86e91b2aec2897aa617f not found: ID does not exist"
Oct 25 08:33:50 addons-631036 kubelet[1497]: I1025 08:33:50.369799 1497 scope.go:117] "RemoveContainer" containerID="88745cacc76412ac74caf33d8464b1c7827fdb02607ccc10346fbf66839f4cd8"
Oct 25 08:33:50 addons-631036 kubelet[1497]: I1025 08:33:50.371635 1497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88745cacc76412ac74caf33d8464b1c7827fdb02607ccc10346fbf66839f4cd8"} err="failed to get container status \"88745cacc76412ac74caf33d8464b1c7827fdb02607ccc10346fbf66839f4cd8\": rpc error: code = NotFound desc = could not find container \"88745cacc76412ac74caf33d8464b1c7827fdb02607ccc10346fbf66839f4cd8\": container with ID starting with 88745cacc76412ac74caf33d8464b1c7827fdb02607ccc10346fbf66839f4cd8 not found: ID does not exist"
Oct 25 08:33:58 addons-631036 kubelet[1497]: E1025 08:33:58.444491 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761381238443985777 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588896} inodes_used:{value:201}}"
Oct 25 08:33:58 addons-631036 kubelet[1497]: E1025 08:33:58.444574 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761381238443985777 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588896} inodes_used:{value:201}}"
Oct 25 08:34:08 addons-631036 kubelet[1497]: E1025 08:34:08.450268 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761381248449840636 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588896} inodes_used:{value:201}}"
Oct 25 08:34:08 addons-631036 kubelet[1497]: E1025 08:34:08.450305 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761381248449840636 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588896} inodes_used:{value:201}}"
Oct 25 08:34:18 addons-631036 kubelet[1497]: E1025 08:34:18.452753 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761381258452442735 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588896} inodes_used:{value:201}}"
Oct 25 08:34:18 addons-631036 kubelet[1497]: E1025 08:34:18.452792 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761381258452442735 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588896} inodes_used:{value:201}}"
Oct 25 08:34:28 addons-631036 kubelet[1497]: E1025 08:34:28.455960 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761381268455351715 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588896} inodes_used:{value:201}}"
Oct 25 08:34:28 addons-631036 kubelet[1497]: E1025 08:34:28.456005 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761381268455351715 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588896} inodes_used:{value:201}}"
Oct 25 08:34:38 addons-631036 kubelet[1497]: I1025 08:34:38.215263 1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Oct 25 08:34:38 addons-631036 kubelet[1497]: E1025 08:34:38.458724 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761381278458313078 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588896} inodes_used:{value:201}}"
Oct 25 08:34:38 addons-631036 kubelet[1497]: E1025 08:34:38.458753 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761381278458313078 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588896} inodes_used:{value:201}}"
Oct 25 08:34:47 addons-631036 kubelet[1497]: I1025 08:34:47.214911 1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-frvrc" secret="" err="secret \"gcp-auth\" not found"
Oct 25 08:34:48 addons-631036 kubelet[1497]: E1025 08:34:48.460861 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761381288460524942 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588896} inodes_used:{value:201}}"
Oct 25 08:34:48 addons-631036 kubelet[1497]: E1025 08:34:48.460900 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761381288460524942 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588896} inodes_used:{value:201}}"
Oct 25 08:34:58 addons-631036 kubelet[1497]: E1025 08:34:58.463854 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761381298463456569 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588896} inodes_used:{value:201}}"
Oct 25 08:34:58 addons-631036 kubelet[1497]: E1025 08:34:58.463877 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761381298463456569 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588896} inodes_used:{value:201}}"
Oct 25 08:35:08 addons-631036 kubelet[1497]: E1025 08:35:08.467128 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761381308466808152 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588896} inodes_used:{value:201}}"
Oct 25 08:35:08 addons-631036 kubelet[1497]: E1025 08:35:08.467156 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761381308466808152 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588896} inodes_used:{value:201}}"
Oct 25 08:35:18 addons-631036 kubelet[1497]: E1025 08:35:18.469387 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761381318469013643 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588896} inodes_used:{value:201}}"
Oct 25 08:35:18 addons-631036 kubelet[1497]: E1025 08:35:18.469423 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761381318469013643 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588896} inodes_used:{value:201}}"
Oct 25 08:35:22 addons-631036 kubelet[1497]: I1025 08:35:22.721400 1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9glk5\" (UniqueName: \"kubernetes.io/projected/621636ff-a5a1-4705-859c-3adbd54cbb54-kube-api-access-9glk5\") pod \"hello-world-app-5d498dc89-m9rs7\" (UID: \"621636ff-a5a1-4705-859c-3adbd54cbb54\") " pod="default/hello-world-app-5d498dc89-m9rs7"
==> storage-provisioner [3949e1e589d6de8f336d9514c13a17e20b8e77323cd06ebc8c00867db7c45eb4] <==
W1025 08:34:58.798105 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 08:35:00.802241 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 08:35:00.808130 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 08:35:02.811900 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 08:35:02.819940 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 08:35:04.823024 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 08:35:04.829011 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 08:35:06.832858 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 08:35:06.841784 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 08:35:08.846760 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 08:35:08.853211 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 08:35:10.857516 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 08:35:10.865674 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 08:35:12.869163 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 08:35:12.874853 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 08:35:14.878413 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 08:35:14.884930 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 08:35:16.890260 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 08:35:16.897427 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 08:35:18.901646 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 08:35:18.907521 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 08:35:20.911468 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 08:35:20.917737 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 08:35:22.932888 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 08:35:22.939701 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-631036 -n addons-631036
helpers_test.go:269: (dbg) Run: kubectl --context addons-631036 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-m9rs7 ingress-nginx-admission-create-29xlb ingress-nginx-admission-patch-rmrl2
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context addons-631036 describe pod hello-world-app-5d498dc89-m9rs7 ingress-nginx-admission-create-29xlb ingress-nginx-admission-patch-rmrl2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-631036 describe pod hello-world-app-5d498dc89-m9rs7 ingress-nginx-admission-create-29xlb ingress-nginx-admission-patch-rmrl2: exit status 1 (87.816598ms)
-- stdout --
Name:             hello-world-app-5d498dc89-m9rs7
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-631036/192.168.39.24
Start Time:       Sat, 25 Oct 2025 08:35:22 +0000
Labels:           app=hello-world-app
                  pod-template-hash=5d498dc89
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/hello-world-app-5d498dc89
Containers:
  hello-world-app:
    Container ID:
    Image:          docker.io/kicbase/echo-server:1.0
    Image ID:
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9glk5 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-9glk5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-m9rs7 to addons-631036
  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-29xlb" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-rmrl2" not found
** /stderr **
helpers_test.go:287: kubectl --context addons-631036 describe pod hello-world-app-5d498dc89-m9rs7 ingress-nginx-admission-create-29xlb ingress-nginx-admission-patch-rmrl2: exit status 1
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-631036 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-631036 addons disable ingress-dns --alsologtostderr -v=1: (1.710359123s)
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-631036 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-631036 addons disable ingress --alsologtostderr -v=1: (7.795867872s)
--- FAIL: TestAddons/parallel/Ingress (159.19s)