=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run: kubectl --context addons-192357 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run: kubectl --context addons-192357 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run: kubectl --context addons-192357 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [483f2351-0a72-4e13-a1e4-258f9c460626] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [483f2351-0a72-4e13-a1e4-258f9c460626] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.003248824s
I1025 09:35:55.479295 518586 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run: out/minikube-linux-amd64 -p addons-192357 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-192357 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.267223999s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run: kubectl --context addons-192357 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run: out/minikube-linux-amd64 -p addons-192357 ip
addons_test.go:299: (dbg) Run: nslookup hello-john.test 192.168.39.24
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-192357 -n addons-192357
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p addons-192357 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-192357 logs -n 25: (1.138218183s)
helpers_test.go:260: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-131119 │ download-only-131119 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
│ start │ --download-only -p binary-mirror-221432 --alsologtostderr --binary-mirror http://127.0.0.1:34333 --driver=kvm2 --container-runtime=crio │ binary-mirror-221432 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ │
│ delete │ -p binary-mirror-221432 │ binary-mirror-221432 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
│ addons │ disable dashboard -p addons-192357 │ addons-192357 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ │
│ addons │ enable dashboard -p addons-192357 │ addons-192357 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ │
│ start │ -p addons-192357 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-192357 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:34 UTC │
│ addons │ addons-192357 addons disable volcano --alsologtostderr -v=1 │ addons-192357 │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
│ addons │ addons-192357 addons disable gcp-auth --alsologtostderr -v=1 │ addons-192357 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
│ addons │ enable headlamp -p addons-192357 --alsologtostderr -v=1 │ addons-192357 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
│ addons │ addons-192357 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-192357 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
│ ssh │ addons-192357 ssh cat /opt/local-path-provisioner/pvc-4e6c1e1a-b414-4ac6-b166-5b5da4fcd5ae_default_test-pvc/file1 │ addons-192357 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
│ addons │ addons-192357 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-192357 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:36 UTC │
│ addons │ addons-192357 addons disable headlamp --alsologtostderr -v=1 │ addons-192357 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
│ ip │ addons-192357 ip │ addons-192357 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
│ addons │ addons-192357 addons disable registry --alsologtostderr -v=1 │ addons-192357 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
│ addons │ addons-192357 addons disable metrics-server --alsologtostderr -v=1 │ addons-192357 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
│ addons │ addons-192357 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-192357 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-192357 │ addons-192357 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
│ addons │ addons-192357 addons disable registry-creds --alsologtostderr -v=1 │ addons-192357 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
│ addons │ addons-192357 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-192357 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
│ ssh │ addons-192357 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-192357 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ │
│ addons │ addons-192357 addons disable yakd --alsologtostderr -v=1 │ addons-192357 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:36 UTC │
│ addons │ addons-192357 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-192357 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
│ addons │ addons-192357 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-192357 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
│ ip │ addons-192357 ip │ addons-192357 │ jenkins │ v1.37.0 │ 25 Oct 25 09:38 UTC │ 25 Oct 25 09:38 UTC │
└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/10/25 09:31:40
Running on machine: ubuntu-20-agent-10
Binary: Built with gc go1.24.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1025 09:31:40.537143 519259 out.go:360] Setting OutFile to fd 1 ...
I1025 09:31:40.537242 519259 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:31:40.537250 519259 out.go:374] Setting ErrFile to fd 2...
I1025 09:31:40.537254 519259 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:31:40.537440 519259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-514677/.minikube/bin
I1025 09:31:40.538389 519259 out.go:368] Setting JSON to false
I1025 09:31:40.539464 519259 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8037,"bootTime":1761376664,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1025 09:31:40.539570 519259 start.go:141] virtualization: kvm guest
I1025 09:31:40.540899 519259 out.go:179] * [addons-192357] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1025 09:31:40.542036 519259 out.go:179] - MINIKUBE_LOCATION=21767
I1025 09:31:40.542039 519259 notify.go:220] Checking for updates...
I1025 09:31:40.543791 519259 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1025 09:31:40.544740 519259 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21767-514677/kubeconfig
I1025 09:31:40.545668 519259 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-514677/.minikube
I1025 09:31:40.546602 519259 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1025 09:31:40.547527 519259 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1025 09:31:40.548606 519259 driver.go:421] Setting default libvirt URI to qemu:///system
I1025 09:31:40.578303 519259 out.go:179] * Using the kvm2 driver based on user configuration
I1025 09:31:40.579246 519259 start.go:305] selected driver: kvm2
I1025 09:31:40.579259 519259 start.go:925] validating driver "kvm2" against <nil>
I1025 09:31:40.579269 519259 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1025 09:31:40.579967 519259 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1025 09:31:40.580263 519259 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1025 09:31:40.580300 519259 cni.go:84] Creating CNI manager for ""
I1025 09:31:40.580362 519259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1025 09:31:40.580372 519259 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1025 09:31:40.580423 519259 start.go:349] cluster config:
{Name:addons-192357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-192357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1025 09:31:40.580551 519259 iso.go:125] acquiring lock: {Name:mk326c8adc033e0df3de1c0b90db9352d38584ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 09:31:40.581761 519259 out.go:179] * Starting "addons-192357" primary control-plane node in "addons-192357" cluster
I1025 09:31:40.582578 519259 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1025 09:31:40.582610 519259 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-514677/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
I1025 09:31:40.582623 519259 cache.go:58] Caching tarball of preloaded images
I1025 09:31:40.582700 519259 preload.go:233] Found /home/jenkins/minikube-integration/21767-514677/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1025 09:31:40.582715 519259 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
I1025 09:31:40.583060 519259 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/config.json ...
I1025 09:31:40.583082 519259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/config.json: {Name:mkfb9f4b557500a587ee3b67c9a28248056c7d07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 09:31:40.583746 519259 start.go:360] acquireMachinesLock for addons-192357: {Name:mk4afe09d93b6b41a93a0b864727072d17f494f9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1025 09:31:40.583813 519259 start.go:364] duration metric: took 47.13µs to acquireMachinesLock for "addons-192357"
I1025 09:31:40.583837 519259 start.go:93] Provisioning new machine with config: &{Name:addons-192357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-192357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1025 09:31:40.583911 519259 start.go:125] createHost starting for "" (driver="kvm2")
I1025 09:31:40.585029 519259 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1025 09:31:40.585222 519259 start.go:159] libmachine.API.Create for "addons-192357" (driver="kvm2")
I1025 09:31:40.585253 519259 client.go:168] LocalClient.Create starting
I1025 09:31:40.585385 519259 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21767-514677/.minikube/certs/ca.pem
I1025 09:31:40.605825 519259 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21767-514677/.minikube/certs/cert.pem
I1025 09:31:40.711736 519259 main.go:141] libmachine: creating domain...
I1025 09:31:40.711753 519259 main.go:141] libmachine: creating network...
I1025 09:31:40.713172 519259 main.go:141] libmachine: found existing default network
I1025 09:31:40.713369 519259 main.go:141] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1025 09:31:40.714001 519259 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b33ab0}
I1025 09:31:40.714151 519259 main.go:141] libmachine: defining private network:
<network>
<name>mk-addons-192357</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1025 09:31:40.719583 519259 main.go:141] libmachine: creating private network mk-addons-192357 192.168.39.0/24...
I1025 09:31:40.784157 519259 main.go:141] libmachine: private network mk-addons-192357 192.168.39.0/24 created
I1025 09:31:40.784473 519259 main.go:141] libmachine: <network>
<name>mk-addons-192357</name>
<uuid>3331f172-0c52-4bdc-b815-315e9dd0161c</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:a7:f0:b8'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1025 09:31:40.784513 519259 main.go:141] libmachine: setting up store path in /home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357 ...
I1025 09:31:40.784540 519259 main.go:141] libmachine: building disk image from file:///home/jenkins/minikube-integration/21767-514677/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
I1025 09:31:40.784551 519259 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21767-514677/.minikube
I1025 09:31:40.784620 519259 main.go:141] libmachine: Downloading /home/jenkins/minikube-integration/21767-514677/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21767-514677/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
I1025 09:31:41.085893 519259 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/id_rsa...
I1025 09:31:41.283262 519259 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/addons-192357.rawdisk...
I1025 09:31:41.283309 519259 main.go:141] libmachine: Writing magic tar header
I1025 09:31:41.283333 519259 main.go:141] libmachine: Writing SSH key tar header
I1025 09:31:41.283417 519259 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357 ...
I1025 09:31:41.283482 519259 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357
I1025 09:31:41.283536 519259 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357 (perms=drwx------)
I1025 09:31:41.283557 519259 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21767-514677/.minikube/machines
I1025 09:31:41.283567 519259 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21767-514677/.minikube/machines (perms=drwxr-xr-x)
I1025 09:31:41.283582 519259 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21767-514677/.minikube
I1025 09:31:41.283592 519259 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21767-514677/.minikube (perms=drwxr-xr-x)
I1025 09:31:41.283609 519259 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21767-514677
I1025 09:31:41.283620 519259 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21767-514677 (perms=drwxrwxr-x)
I1025 09:31:41.283630 519259 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1025 09:31:41.283639 519259 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1025 09:31:41.283650 519259 main.go:141] libmachine: checking permissions on dir: /home/jenkins
I1025 09:31:41.283659 519259 main.go:141] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1025 09:31:41.283667 519259 main.go:141] libmachine: checking permissions on dir: /home
I1025 09:31:41.283686 519259 main.go:141] libmachine: skipping /home - not owner
I1025 09:31:41.283692 519259 main.go:141] libmachine: defining domain...
I1025 09:31:41.284929 519259 main.go:141] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-192357</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/addons-192357.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-192357'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
I1025 09:31:41.289805 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:21:a2:3c in network default
I1025 09:31:41.290432 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:41.290450 519259 main.go:141] libmachine: starting domain...
I1025 09:31:41.290454 519259 main.go:141] libmachine: ensuring networks are active...
I1025 09:31:41.291093 519259 main.go:141] libmachine: Ensuring network default is active
I1025 09:31:41.291455 519259 main.go:141] libmachine: Ensuring network mk-addons-192357 is active
I1025 09:31:41.292009 519259 main.go:141] libmachine: getting domain XML...
I1025 09:31:41.293014 519259 main.go:141] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-192357</name>
<uuid>f5973cda-48fd-4955-bdc3-0658b1e0d6a0</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/addons-192357.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:f1:5b:46'/>
<source network='mk-addons-192357'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:21:a2:3c'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1025 09:31:42.519380 519259 main.go:141] libmachine: waiting for domain to start...
I1025 09:31:42.520675 519259 main.go:141] libmachine: domain is now running
I1025 09:31:42.520690 519259 main.go:141] libmachine: waiting for IP...
I1025 09:31:42.521467 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:42.522006 519259 main.go:141] libmachine: no network interface addresses found for domain addons-192357 (source=lease)
I1025 09:31:42.522021 519259 main.go:141] libmachine: trying to list again with source=arp
I1025 09:31:42.522320 519259 main.go:141] libmachine: unable to find current IP address of domain addons-192357 in network mk-addons-192357 (interfaces detected: [])
I1025 09:31:42.522375 519259 retry.go:31] will retry after 273.508145ms: waiting for domain to come up
I1025 09:31:42.797932 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:42.798558 519259 main.go:141] libmachine: no network interface addresses found for domain addons-192357 (source=lease)
I1025 09:31:42.798582 519259 main.go:141] libmachine: trying to list again with source=arp
I1025 09:31:42.798901 519259 main.go:141] libmachine: unable to find current IP address of domain addons-192357 in network mk-addons-192357 (interfaces detected: [])
I1025 09:31:42.798955 519259 retry.go:31] will retry after 265.083804ms: waiting for domain to come up
I1025 09:31:43.065258 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:43.065940 519259 main.go:141] libmachine: no network interface addresses found for domain addons-192357 (source=lease)
I1025 09:31:43.065957 519259 main.go:141] libmachine: trying to list again with source=arp
I1025 09:31:43.066200 519259 main.go:141] libmachine: unable to find current IP address of domain addons-192357 in network mk-addons-192357 (interfaces detected: [])
I1025 09:31:43.066235 519259 retry.go:31] will retry after 420.13406ms: waiting for domain to come up
I1025 09:31:43.487866 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:43.488452 519259 main.go:141] libmachine: no network interface addresses found for domain addons-192357 (source=lease)
I1025 09:31:43.488473 519259 main.go:141] libmachine: trying to list again with source=arp
I1025 09:31:43.488790 519259 main.go:141] libmachine: unable to find current IP address of domain addons-192357 in network mk-addons-192357 (interfaces detected: [])
I1025 09:31:43.488834 519259 retry.go:31] will retry after 447.934762ms: waiting for domain to come up
I1025 09:31:43.938477 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:43.939172 519259 main.go:141] libmachine: no network interface addresses found for domain addons-192357 (source=lease)
I1025 09:31:43.939196 519259 main.go:141] libmachine: trying to list again with source=arp
I1025 09:31:43.939521 519259 main.go:141] libmachine: unable to find current IP address of domain addons-192357 in network mk-addons-192357 (interfaces detected: [])
I1025 09:31:43.939563 519259 retry.go:31] will retry after 698.196227ms: waiting for domain to come up
I1025 09:31:44.639604 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:44.640268 519259 main.go:141] libmachine: no network interface addresses found for domain addons-192357 (source=lease)
I1025 09:31:44.640284 519259 main.go:141] libmachine: trying to list again with source=arp
I1025 09:31:44.640678 519259 main.go:141] libmachine: unable to find current IP address of domain addons-192357 in network mk-addons-192357 (interfaces detected: [])
I1025 09:31:44.640731 519259 retry.go:31] will retry after 841.530566ms: waiting for domain to come up
I1025 09:31:45.483644 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:45.484231 519259 main.go:141] libmachine: no network interface addresses found for domain addons-192357 (source=lease)
I1025 09:31:45.484245 519259 main.go:141] libmachine: trying to list again with source=arp
I1025 09:31:45.484543 519259 main.go:141] libmachine: unable to find current IP address of domain addons-192357 in network mk-addons-192357 (interfaces detected: [])
I1025 09:31:45.484576 519259 retry.go:31] will retry after 1.133634897s: waiting for domain to come up
I1025 09:31:46.620196 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:46.620784 519259 main.go:141] libmachine: no network interface addresses found for domain addons-192357 (source=lease)
I1025 09:31:46.620800 519259 main.go:141] libmachine: trying to list again with source=arp
I1025 09:31:46.621080 519259 main.go:141] libmachine: unable to find current IP address of domain addons-192357 in network mk-addons-192357 (interfaces detected: [])
I1025 09:31:46.621113 519259 retry.go:31] will retry after 1.403705003s: waiting for domain to come up
I1025 09:31:48.026618 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:48.027130 519259 main.go:141] libmachine: no network interface addresses found for domain addons-192357 (source=lease)
I1025 09:31:48.027143 519259 main.go:141] libmachine: trying to list again with source=arp
I1025 09:31:48.027479 519259 main.go:141] libmachine: unable to find current IP address of domain addons-192357 in network mk-addons-192357 (interfaces detected: [])
I1025 09:31:48.027537 519259 retry.go:31] will retry after 1.736270961s: waiting for domain to come up
I1025 09:31:49.766918 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:49.767654 519259 main.go:141] libmachine: no network interface addresses found for domain addons-192357 (source=lease)
I1025 09:31:49.767680 519259 main.go:141] libmachine: trying to list again with source=arp
I1025 09:31:49.768013 519259 main.go:141] libmachine: unable to find current IP address of domain addons-192357 in network mk-addons-192357 (interfaces detected: [])
I1025 09:31:49.768070 519259 retry.go:31] will retry after 1.513031735s: waiting for domain to come up
I1025 09:31:51.283165 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:51.283785 519259 main.go:141] libmachine: no network interface addresses found for domain addons-192357 (source=lease)
I1025 09:31:51.283811 519259 main.go:141] libmachine: trying to list again with source=arp
I1025 09:31:51.284161 519259 main.go:141] libmachine: unable to find current IP address of domain addons-192357 in network mk-addons-192357 (interfaces detected: [])
I1025 09:31:51.284214 519259 retry.go:31] will retry after 1.922759375s: waiting for domain to come up
I1025 09:31:53.209306 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:53.209957 519259 main.go:141] libmachine: no network interface addresses found for domain addons-192357 (source=lease)
I1025 09:31:53.209975 519259 main.go:141] libmachine: trying to list again with source=arp
I1025 09:31:53.210316 519259 main.go:141] libmachine: unable to find current IP address of domain addons-192357 in network mk-addons-192357 (interfaces detected: [])
I1025 09:31:53.210352 519259 retry.go:31] will retry after 2.573152095s: waiting for domain to come up
I1025 09:31:55.785461 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:55.786118 519259 main.go:141] libmachine: domain addons-192357 has current primary IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:55.786138 519259 main.go:141] libmachine: found domain IP: 192.168.39.24
I1025 09:31:55.786146 519259 main.go:141] libmachine: reserving static IP address...
I1025 09:31:55.786563 519259 main.go:141] libmachine: unable to find host DHCP lease matching {name: "addons-192357", mac: "52:54:00:f1:5b:46", ip: "192.168.39.24"} in network mk-addons-192357
I1025 09:31:55.970117 519259 main.go:141] libmachine: reserved static IP address 192.168.39.24 for domain addons-192357
I1025 09:31:55.970143 519259 main.go:141] libmachine: waiting for SSH...
I1025 09:31:55.970152 519259 main.go:141] libmachine: Getting to WaitForSSH function...
I1025 09:31:55.972975 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:55.973393 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f1:5b:46}
I1025 09:31:55.973418 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:55.973618 519259 main.go:141] libmachine: Using SSH client type: native
I1025 09:31:55.973869 519259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1025 09:31:55.973879 519259 main.go:141] libmachine: About to run SSH command:
exit 0
I1025 09:31:56.081675 519259 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1025 09:31:56.082039 519259 main.go:141] libmachine: domain creation complete
I1025 09:31:56.083724 519259 machine.go:93] provisionDockerMachine start ...
I1025 09:31:56.085951 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:56.086312 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:31:56.086342 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:56.086544 519259 main.go:141] libmachine: Using SSH client type: native
I1025 09:31:56.086755 519259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1025 09:31:56.086769 519259 main.go:141] libmachine: About to run SSH command:
hostname
I1025 09:31:56.192559 519259 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I1025 09:31:56.192589 519259 buildroot.go:166] provisioning hostname "addons-192357"
I1025 09:31:56.195660 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:56.196064 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:31:56.196096 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:56.196288 519259 main.go:141] libmachine: Using SSH client type: native
I1025 09:31:56.196485 519259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1025 09:31:56.196513 519259 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-192357 && echo "addons-192357" | sudo tee /etc/hostname
I1025 09:31:56.318950 519259 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-192357
I1025 09:31:56.321806 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:56.322136 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:31:56.322177 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:56.322343 519259 main.go:141] libmachine: Using SSH client type: native
I1025 09:31:56.322573 519259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1025 09:31:56.322596 519259 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-192357' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-192357/g' /etc/hosts;
else
echo '127.0.1.1 addons-192357' | sudo tee -a /etc/hosts;
fi
fi
I1025 09:31:56.438267 519259 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1025 09:31:56.438302 519259 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21767-514677/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-514677/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-514677/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-514677/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-514677/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-514677/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-514677/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-514677/.minikube}
I1025 09:31:56.438338 519259 buildroot.go:174] setting up certificates
I1025 09:31:56.438354 519259 provision.go:84] configureAuth start
I1025 09:31:56.440987 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:56.441370 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:31:56.441394 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:56.443535 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:56.443867 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:31:56.443888 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:56.444031 519259 provision.go:143] copyHostCerts
I1025 09:31:56.444116 519259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-514677/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-514677/.minikube/ca.pem (1078 bytes)
I1025 09:31:56.444253 519259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-514677/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-514677/.minikube/cert.pem (1123 bytes)
I1025 09:31:56.444332 519259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-514677/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-514677/.minikube/key.pem (1671 bytes)
I1025 09:31:56.444407 519259 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-514677/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-514677/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-514677/.minikube/certs/ca-key.pem org=jenkins.addons-192357 san=[127.0.0.1 192.168.39.24 addons-192357 localhost minikube]
I1025 09:31:57.147105 519259 provision.go:177] copyRemoteCerts
I1025 09:31:57.147178 519259 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1025 09:31:57.149663 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:57.150053 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:31:57.150082 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:57.150235 519259 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/id_rsa Username:docker}
I1025 09:31:57.234161 519259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-514677/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1025 09:31:57.261251 519259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-514677/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1025 09:31:57.288578 519259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-514677/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1025 09:31:57.314444 519259 provision.go:87] duration metric: took 876.075177ms to configureAuth
I1025 09:31:57.314480 519259 buildroot.go:189] setting minikube options for container-runtime
I1025 09:31:57.314669 519259 config.go:182] Loaded profile config "addons-192357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:31:57.317535 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:57.317948 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:31:57.317974 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:57.318137 519259 main.go:141] libmachine: Using SSH client type: native
I1025 09:31:57.318333 519259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1025 09:31:57.318351 519259 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1025 09:31:57.555513 519259 main.go:141] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1025 09:31:57.555543 519259 machine.go:96] duration metric: took 1.47180128s to provisionDockerMachine
I1025 09:31:57.555554 519259 client.go:171] duration metric: took 16.970294241s to LocalClient.Create
I1025 09:31:57.555571 519259 start.go:167] duration metric: took 16.970349722s to libmachine.API.Create "addons-192357"
I1025 09:31:57.555579 519259 start.go:293] postStartSetup for "addons-192357" (driver="kvm2")
I1025 09:31:57.555589 519259 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1025 09:31:57.555661 519259 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1025 09:31:57.558669 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:57.559135 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:31:57.559173 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:57.559335 519259 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/id_rsa Username:docker}
I1025 09:31:57.646319 519259 ssh_runner.go:195] Run: cat /etc/os-release
I1025 09:31:57.650824 519259 info.go:137] Remote host: Buildroot 2025.02
I1025 09:31:57.650855 519259 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-514677/.minikube/addons for local assets ...
I1025 09:31:57.650951 519259 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-514677/.minikube/files for local assets ...
I1025 09:31:57.650992 519259 start.go:296] duration metric: took 95.405266ms for postStartSetup
I1025 09:31:57.654038 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:57.654433 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:31:57.654470 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:57.654746 519259 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/config.json ...
I1025 09:31:57.654954 519259 start.go:128] duration metric: took 17.071030346s to createHost
I1025 09:31:57.657004 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:57.657343 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:31:57.657362 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:57.657527 519259 main.go:141] libmachine: Using SSH client type: native
I1025 09:31:57.657713 519259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil> [] 0s} 192.168.39.24 22 <nil> <nil>}
I1025 09:31:57.657724 519259 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I1025 09:31:57.767571 519259 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761384717.723949766
I1025 09:31:57.767604 519259 fix.go:216] guest clock: 1761384717.723949766
I1025 09:31:57.767615 519259 fix.go:229] Guest: 2025-10-25 09:31:57.723949766 +0000 UTC Remote: 2025-10-25 09:31:57.654968808 +0000 UTC m=+17.166671655 (delta=68.980958ms)
I1025 09:31:57.767642 519259 fix.go:200] guest clock delta is within tolerance: 68.980958ms
I1025 09:31:57.767650 519259 start.go:83] releasing machines lock for "addons-192357", held for 17.183823402s
I1025 09:31:57.770981 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:57.771377 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:31:57.771408 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:57.771998 519259 ssh_runner.go:195] Run: cat /version.json
I1025 09:31:57.772074 519259 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1025 09:31:57.775010 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:57.775216 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:57.775523 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:31:57.775561 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:57.775669 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:31:57.775698 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:57.775747 519259 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/id_rsa Username:docker}
I1025 09:31:57.775925 519259 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/id_rsa Username:docker}
I1025 09:31:57.880207 519259 ssh_runner.go:195] Run: systemctl --version
I1025 09:31:57.886197 519259 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1025 09:31:58.039200 519259 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1025 09:31:58.047153 519259 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1025 09:31:58.047241 519259 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1025 09:31:58.066738 519259 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1025 09:31:58.066787 519259 start.go:495] detecting cgroup driver to use...
I1025 09:31:58.066882 519259 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1025 09:31:58.084308 519259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1025 09:31:58.100220 519259 docker.go:218] disabling cri-docker service (if available) ...
I1025 09:31:58.100304 519259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1025 09:31:58.116984 519259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1025 09:31:58.132039 519259 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1025 09:31:58.275462 519259 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1025 09:31:58.480356 519259 docker.go:234] disabling docker service ...
I1025 09:31:58.480444 519259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1025 09:31:58.496226 519259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1025 09:31:58.509914 519259 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1025 09:31:58.674314 519259 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1025 09:31:58.829738 519259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1025 09:31:58.844911 519259 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1025 09:31:58.866120 519259 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1025 09:31:58.866196 519259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1025 09:31:58.877989 519259 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1025 09:31:58.878057 519259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1025 09:31:58.890190 519259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1025 09:31:58.901256 519259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1025 09:31:58.912264 519259 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1025 09:31:58.923956 519259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1025 09:31:58.934919 519259 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1025 09:31:58.953096 519259 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
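Note: the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the settings sketched below. This is a reconstruction from the commands in this log rather than a dump of the file; it can be checked from the host with standard minikube/grep usage, for example:

minikube -p addons-192357 ssh -- sudo grep -E -A2 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
# expected (approximately), per the sed commands above:
#   pause_image = "registry.k8s.io/pause:3.10.1"
#   cgroup_manager = "cgroupfs"
#   conmon_cgroup = "pod"
#   default_sysctls = [
#     "net.ipv4.ip_unprivileged_port_start=0",
#   ]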
I1025 09:31:58.965176 519259 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1025 09:31:58.974859 519259 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1025 09:31:58.974924 519259 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1025 09:31:58.994492 519259 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
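Note: the two runs above cover the usual bridge-netfilter prerequisites for Kubernetes (br_netfilter loaded, IPv4 forwarding on). Outside of minikube's own provisioning, a minimal sketch for making the same settings persistent on a generic Linux host (file paths are conventional examples, not taken from this log):

sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' | sudo tee /etc/sysctl.d/99-k8s.conf
sudo sysctl --system    # reload all sysctl.d fragments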
I1025 09:31:59.006180 519259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1025 09:31:59.148095 519259 ssh_runner.go:195] Run: sudo systemctl restart crio
I1025 09:31:59.545428 519259 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
I1025 09:31:59.545531 519259 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1025 09:31:59.550588 519259 start.go:563] Will wait 60s for crictl version
I1025 09:31:59.550668 519259 ssh_runner.go:195] Run: which crictl
I1025 09:31:59.554551 519259 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1025 09:31:59.591385 519259 start.go:579] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1025 09:31:59.591526 519259 ssh_runner.go:195] Run: crio --version
I1025 09:31:59.619613 519259 ssh_runner.go:195] Run: crio --version
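Note: these runtime probes can be repeated by hand against the node; an illustrative invocation (profile name from this run, otherwise standard minikube/crictl usage):

minikube -p addons-192357 ssh -- sudo crictl version
minikube -p addons-192357 ssh -- crio --version
minikube -p addons-192357 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info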
I1025 09:31:59.650179 519259 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
I1025 09:31:59.653979 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:59.654357 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:31:59.654380 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:31:59.654600 519259 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1025 09:31:59.658624 519259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1025 09:31:59.672566 519259 kubeadm.go:883] updating cluster {Name:addons-192357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-192357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1025 09:31:59.672753 519259 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1025 09:31:59.672809 519259 ssh_runner.go:195] Run: sudo crictl images --output json
I1025 09:31:59.705627 519259 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
I1025 09:31:59.705722 519259 ssh_runner.go:195] Run: which lz4
I1025 09:31:59.709605 519259 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1025 09:31:59.713950 519259 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1025 09:31:59.713973 519259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-514677/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
I1025 09:32:01.006641 519259 crio.go:462] duration metric: took 1.297084527s to copy over tarball
I1025 09:32:01.006727 519259 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1025 09:32:02.603008 519259 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.59624374s)
I1025 09:32:02.603040 519259 crio.go:469] duration metric: took 1.596363914s to extract the tarball
I1025 09:32:02.603048 519259 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1025 09:32:02.642994 519259 ssh_runner.go:195] Run: sudo crictl images --output json
I1025 09:32:02.690384 519259 crio.go:514] all images are preloaded for cri-o runtime.
I1025 09:32:02.690413 519259 cache_images.go:85] Images are preloaded, skipping loading
I1025 09:32:02.690423 519259 kubeadm.go:934] updating node { 192.168.39.24 8443 v1.34.1 crio true true} ...
I1025 09:32:02.690558 519259 kubeadm.go:946] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-192357 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.24
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:addons-192357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1025 09:32:02.690644 519259 ssh_runner.go:195] Run: crio config
I1025 09:32:02.736000 519259 cni.go:84] Creating CNI manager for ""
I1025 09:32:02.736032 519259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1025 09:32:02.736056 519259 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1025 09:32:02.736081 519259 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.24 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-192357 NodeName:addons-192357 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1025 09:32:02.736213 519259 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.24
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/crio/crio.sock
name: "addons-192357"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.39.24"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.24"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
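Note: this rendered config is written to /var/tmp/minikube/kubeadm.yaml.new below and copied to kubeadm.yaml before init. If a config like this needs to be sanity-checked by hand, recent kubeadm releases ship a validator; a sketch, assuming the kubeadm binary staged by minikube on the node:

sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml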
I1025 09:32:02.736286 519259 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1025 09:32:02.749614 519259 binaries.go:44] Found k8s binaries, skipping transfer
I1025 09:32:02.749693 519259 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1025 09:32:02.762849 519259 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I1025 09:32:02.784138 519259 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1025 09:32:02.803660 519259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
I1025 09:32:02.823425 519259 ssh_runner.go:195] Run: grep 192.168.39.24 control-plane.minikube.internal$ /etc/hosts
I1025 09:32:02.827679 519259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.24 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1025 09:32:02.842408 519259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1025 09:32:02.986943 519259 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1025 09:32:03.007438 519259 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357 for IP: 192.168.39.24
I1025 09:32:03.007471 519259 certs.go:195] generating shared ca certs ...
I1025 09:32:03.007516 519259 certs.go:227] acquiring lock for ca certs: {Name:mk744c6572a0ddfc38e69c8829b4477d27e719c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 09:32:03.007712 519259 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-514677/.minikube/ca.key
I1025 09:32:03.195115 519259 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-514677/.minikube/ca.crt ...
I1025 09:32:03.195150 519259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-514677/.minikube/ca.crt: {Name:mk090e32458846efc02ba0fce809172fb1a449c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 09:32:03.195375 519259 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-514677/.minikube/ca.key ...
I1025 09:32:03.195394 519259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-514677/.minikube/ca.key: {Name:mk996a34fd04664aeb7630980deb7dbbe94b8698 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 09:32:03.195537 519259 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-514677/.minikube/proxy-client-ca.key
I1025 09:32:03.623289 519259 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-514677/.minikube/proxy-client-ca.crt ...
I1025 09:32:03.623327 519259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-514677/.minikube/proxy-client-ca.crt: {Name:mk44503f569d1522d0386c776cf6c5aa8a56b952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 09:32:03.623550 519259 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-514677/.minikube/proxy-client-ca.key ...
I1025 09:32:03.623623 519259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-514677/.minikube/proxy-client-ca.key: {Name:mk38770f1381cc651684407cb3f2bc2d9ef35e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 09:32:03.623767 519259 certs.go:257] generating profile certs ...
I1025 09:32:03.623848 519259 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/client.key
I1025 09:32:03.623881 519259 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/client.crt with IP's: []
I1025 09:32:03.928517 519259 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/client.crt ...
I1025 09:32:03.928552 519259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/client.crt: {Name:mkeda09b4ac6a6a4554f50a8095f45d94775d4c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 09:32:03.928749 519259 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/client.key ...
I1025 09:32:03.928762 519259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/client.key: {Name:mk1b30af781f3d300290a297e8dde81d5c468104 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 09:32:03.928833 519259 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/apiserver.key.e4a4a78a
I1025 09:32:03.928853 519259 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/apiserver.crt.e4a4a78a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.24]
I1025 09:32:04.185403 519259 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/apiserver.crt.e4a4a78a ...
I1025 09:32:04.185442 519259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/apiserver.crt.e4a4a78a: {Name:mk38a0280e3fb75793008956956326e4d20e7b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 09:32:04.185663 519259 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/apiserver.key.e4a4a78a ...
I1025 09:32:04.185686 519259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/apiserver.key.e4a4a78a: {Name:mk9e0825dce9ff8257e564754964847de8d16728 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 09:32:04.185815 519259 certs.go:382] copying /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/apiserver.crt.e4a4a78a -> /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/apiserver.crt
I1025 09:32:04.185956 519259 certs.go:386] copying /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/apiserver.key.e4a4a78a -> /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/apiserver.key
I1025 09:32:04.186034 519259 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/proxy-client.key
I1025 09:32:04.186064 519259 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/proxy-client.crt with IP's: []
I1025 09:32:04.493945 519259 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/proxy-client.crt ...
I1025 09:32:04.493984 519259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/proxy-client.crt: {Name:mkb6f6c0c62fa4ddbf19e7297d6f4eea0492538d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 09:32:04.494189 519259 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/proxy-client.key ...
I1025 09:32:04.494209 519259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/proxy-client.key: {Name:mkbc2f5f32f480e4f63726c0111b027f654a6ed9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 09:32:04.494437 519259 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-514677/.minikube/certs/ca-key.pem (1679 bytes)
I1025 09:32:04.494485 519259 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-514677/.minikube/certs/ca.pem (1078 bytes)
I1025 09:32:04.494538 519259 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-514677/.minikube/certs/cert.pem (1123 bytes)
I1025 09:32:04.494575 519259 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-514677/.minikube/certs/key.pem (1671 bytes)
I1025 09:32:04.495225 519259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-514677/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1025 09:32:04.527545 519259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-514677/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1025 09:32:04.557810 519259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-514677/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1025 09:32:04.591211 519259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-514677/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1025 09:32:04.625771 519259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1025 09:32:04.664040 519259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1025 09:32:04.693349 519259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1025 09:32:04.723106 519259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-514677/.minikube/profiles/addons-192357/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1025 09:32:04.751190 519259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-514677/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1025 09:32:04.779466 519259 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1025 09:32:04.799099 519259 ssh_runner.go:195] Run: openssl version
I1025 09:32:04.805283 519259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1025 09:32:04.818541 519259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1025 09:32:04.823853 519259 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:32 /usr/share/ca-certificates/minikubeCA.pem
I1025 09:32:04.823901 519259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1025 09:32:04.830846 519259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
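Note: the b5213941.0 link name is the CA certificate's OpenSSL subject hash plus a .0 suffix, which is how OpenSSL resolves CAs under /etc/ssl/certs. The same two steps by hand (mirroring the commands in this log):

HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints the subject hash; b5213941 for this CA
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"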
I1025 09:32:04.843839 519259 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1025 09:32:04.848443 519259 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1025 09:32:04.848532 519259 kubeadm.go:400] StartCluster: {Name:addons-192357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-192357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1025 09:32:04.848606 519259 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1025 09:32:04.848661 519259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1025 09:32:04.886913 519259 cri.go:89] found id: ""
I1025 09:32:04.886991 519259 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1025 09:32:04.898902 519259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1025 09:32:04.910480 519259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1025 09:32:04.922093 519259 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1025 09:32:04.922119 519259 kubeadm.go:157] found existing configuration files:
I1025 09:32:04.922212 519259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1025 09:32:04.933013 519259 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1025 09:32:04.933073 519259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1025 09:32:04.944702 519259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1025 09:32:04.955457 519259 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1025 09:32:04.955556 519259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1025 09:32:04.966730 519259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1025 09:32:04.976943 519259 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1025 09:32:04.977025 519259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1025 09:32:04.988547 519259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1025 09:32:04.998584 519259 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1025 09:32:04.998658 519259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1025 09:32:05.010179 519259 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1025 09:32:05.056866 519259 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
I1025 09:32:05.056952 519259 kubeadm.go:318] [preflight] Running pre-flight checks
I1025 09:32:05.150240 519259 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
I1025 09:32:05.150374 519259 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1025 09:32:05.150470 519259 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1025 09:32:05.161634 519259 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1025 09:32:05.276263 519259 out.go:252] - Generating certificates and keys ...
I1025 09:32:05.276405 519259 kubeadm.go:318] [certs] Using existing ca certificate authority
I1025 09:32:05.276513 519259 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
I1025 09:32:05.450483 519259 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
I1025 09:32:05.824404 519259 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
I1025 09:32:06.537679 519259 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
I1025 09:32:06.663389 519259 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
I1025 09:32:06.932534 519259 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
I1025 09:32:06.932719 519259 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-192357 localhost] and IPs [192.168.39.24 127.0.0.1 ::1]
I1025 09:32:07.018306 519259 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
I1025 09:32:07.018457 519259 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-192357 localhost] and IPs [192.168.39.24 127.0.0.1 ::1]
I1025 09:32:07.369201 519259 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
I1025 09:32:07.684375 519259 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
I1025 09:32:07.837178 519259 kubeadm.go:318] [certs] Generating "sa" key and public key
I1025 09:32:07.837429 519259 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1025 09:32:08.160011 519259 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
I1025 09:32:08.345241 519259 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1025 09:32:08.644208 519259 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1025 09:32:09.038380 519259 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1025 09:32:09.207781 519259 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1025 09:32:09.208353 519259 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1025 09:32:09.212602 519259 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1025 09:32:09.214585 519259 out.go:252] - Booting up control plane ...
I1025 09:32:09.214722 519259 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1025 09:32:09.214849 519259 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1025 09:32:09.214957 519259 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1025 09:32:09.230264 519259 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1025 09:32:09.230396 519259 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1025 09:32:09.236464 519259 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1025 09:32:09.236772 519259 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1025 09:32:09.236846 519259 kubeadm.go:318] [kubelet-start] Starting the kubelet
I1025 09:32:09.402680 519259 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1025 09:32:09.402817 519259 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1025 09:32:10.403721 519259 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002156022s
I1025 09:32:10.406445 519259 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1025 09:32:10.406623 519259 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.24:8443/livez
I1025 09:32:10.406756 519259 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1025 09:32:10.406873 519259 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1025 09:32:13.117936 519259 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.714034794s
I1025 09:32:13.755009 519259 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.35198474s
I1025 09:32:16.165595 519259 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.762464868s
I1025 09:32:16.292256 519259 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1025 09:32:16.313563 519259 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1025 09:32:16.331016 519259 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
I1025 09:32:16.331234 519259 kubeadm.go:318] [mark-control-plane] Marking the node addons-192357 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1025 09:32:16.343794 519259 kubeadm.go:318] [bootstrap-token] Using token: pz41c3.fp0mk5afb4sg7h0x
I1025 09:32:16.345027 519259 out.go:252] - Configuring RBAC rules ...
I1025 09:32:16.345149 519259 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1025 09:32:16.353333 519259 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1025 09:32:16.363483 519259 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1025 09:32:16.368251 519259 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1025 09:32:16.372210 519259 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1025 09:32:16.376010 519259 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1025 09:32:16.573975 519259 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1025 09:32:16.999042 519259 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
I1025 09:32:17.570882 519259 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
I1025 09:32:17.573458 519259 kubeadm.go:318]
I1025 09:32:17.573556 519259 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
I1025 09:32:17.573567 519259 kubeadm.go:318]
I1025 09:32:17.573627 519259 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
I1025 09:32:17.573635 519259 kubeadm.go:318]
I1025 09:32:17.573683 519259 kubeadm.go:318] mkdir -p $HOME/.kube
I1025 09:32:17.573778 519259 kubeadm.go:318] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1025 09:32:17.573853 519259 kubeadm.go:318] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1025 09:32:17.573868 519259 kubeadm.go:318]
I1025 09:32:17.573917 519259 kubeadm.go:318] Alternatively, if you are the root user, you can run:
I1025 09:32:17.573924 519259 kubeadm.go:318]
I1025 09:32:17.573961 519259 kubeadm.go:318] export KUBECONFIG=/etc/kubernetes/admin.conf
I1025 09:32:17.573967 519259 kubeadm.go:318]
I1025 09:32:17.574009 519259 kubeadm.go:318] You should now deploy a pod network to the cluster.
I1025 09:32:17.574071 519259 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1025 09:32:17.574128 519259 kubeadm.go:318] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1025 09:32:17.574134 519259 kubeadm.go:318]
I1025 09:32:17.574247 519259 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
I1025 09:32:17.574382 519259 kubeadm.go:318] and service account keys on each node and then running the following as root:
I1025 09:32:17.574410 519259 kubeadm.go:318]
I1025 09:32:17.574550 519259 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token pz41c3.fp0mk5afb4sg7h0x \
I1025 09:32:17.574686 519259 kubeadm.go:318] --discovery-token-ca-cert-hash sha256:e66fc605c3209ee4998bf77b5286bbb4f1b221359ebd6845f71207fb698dcff0 \
I1025 09:32:17.574731 519259 kubeadm.go:318] --control-plane
I1025 09:32:17.574741 519259 kubeadm.go:318]
I1025 09:32:17.574872 519259 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
I1025 09:32:17.574895 519259 kubeadm.go:318]
I1025 09:32:17.575018 519259 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token pz41c3.fp0mk5afb4sg7h0x \
I1025 09:32:17.575119 519259 kubeadm.go:318] --discovery-token-ca-cert-hash sha256:e66fc605c3209ee4998bf77b5286bbb4f1b221359ebd6845f71207fb698dcff0
I1025 09:32:17.575231 519259 kubeadm.go:318] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1025 09:32:17.575243 519259 cni.go:84] Creating CNI manager for ""
I1025 09:32:17.575251 519259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1025 09:32:17.576812 519259 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1025 09:32:17.577858 519259 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1025 09:32:17.590724 519259 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
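Note: the 496-byte conflist itself is not echoed in the log. As a rough illustration only, a bridge conflist for the 10.244.0.0/16 pod CIDR used here has this general shape (field values are illustrative, not minikube's exact file; the /tmp path is chosen so nothing under /etc/cni is touched):

cat <<'EOF' > /tmp/bridge-example.conflist
{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF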
I1025 09:32:17.615812 519259 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1025 09:32:17.615951 519259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1025 09:32:17.616011 519259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-192357 minikube.k8s.io/updated_at=2025_10_25T09_32_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689 minikube.k8s.io/name=addons-192357 minikube.k8s.io/primary=true
I1025 09:32:17.653990 519259 ops.go:34] apiserver oom_adj: -16
I1025 09:32:17.752629 519259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1025 09:32:18.253618 519259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1025 09:32:18.753467 519259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1025 09:32:19.253648 519259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1025 09:32:19.753218 519259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1025 09:32:20.253611 519259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1025 09:32:20.753339 519259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1025 09:32:21.253286 519259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1025 09:32:21.323452 519259 kubeadm.go:1113] duration metric: took 3.707587124s to wait for elevateKubeSystemPrivileges
I1025 09:32:21.323538 519259 kubeadm.go:402] duration metric: took 16.475012363s to StartCluster
I1025 09:32:21.323571 519259 settings.go:142] acquiring lock: {Name:mkbb203de1a8b109927f08b15e40850dc8dbf040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 09:32:21.323720 519259 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21767-514677/kubeconfig
I1025 09:32:21.324165 519259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-514677/kubeconfig: {Name:mk04b0e9f8eda875203060f5ebaad7f1a4345939 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 09:32:21.324397 519259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1025 09:32:21.324441 519259 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1025 09:32:21.324531 519259 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
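Note: the same toggles map onto the addons CLI; for reference, standard subcommands against this profile look like:

minikube -p addons-192357 addons list
minikube -p addons-192357 addons enable ingress
minikube -p addons-192357 addons disable volcano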
I1025 09:32:21.324691 519259 addons.go:69] Setting yakd=true in profile "addons-192357"
I1025 09:32:21.324703 519259 config.go:182] Loaded profile config "addons-192357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:32:21.324719 519259 addons.go:238] Setting addon yakd=true in "addons-192357"
I1025 09:32:21.324717 519259 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-192357"
I1025 09:32:21.324738 519259 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-192357"
I1025 09:32:21.324702 519259 addons.go:69] Setting inspektor-gadget=true in profile "addons-192357"
I1025 09:32:21.324742 519259 addons.go:69] Setting registry-creds=true in profile "addons-192357"
I1025 09:32:21.324758 519259 addons.go:238] Setting addon inspektor-gadget=true in "addons-192357"
I1025 09:32:21.324762 519259 addons.go:69] Setting storage-provisioner=true in profile "addons-192357"
I1025 09:32:21.324771 519259 addons.go:238] Setting addon registry-creds=true in "addons-192357"
I1025 09:32:21.324777 519259 host.go:66] Checking if "addons-192357" exists ...
I1025 09:32:21.324784 519259 host.go:66] Checking if "addons-192357" exists ...
I1025 09:32:21.324795 519259 addons.go:69] Setting volumesnapshots=true in profile "addons-192357"
I1025 09:32:21.324808 519259 addons.go:238] Setting addon volumesnapshots=true in "addons-192357"
I1025 09:32:21.324812 519259 host.go:66] Checking if "addons-192357" exists ...
I1025 09:32:21.324830 519259 host.go:66] Checking if "addons-192357" exists ...
I1025 09:32:21.324754 519259 host.go:66] Checking if "addons-192357" exists ...
I1025 09:32:21.324934 519259 addons.go:69] Setting cloud-spanner=true in profile "addons-192357"
I1025 09:32:21.324949 519259 addons.go:238] Setting addon cloud-spanner=true in "addons-192357"
I1025 09:32:21.324983 519259 host.go:66] Checking if "addons-192357" exists ...
I1025 09:32:21.325016 519259 addons.go:69] Setting ingress=true in profile "addons-192357"
I1025 09:32:21.325044 519259 addons.go:238] Setting addon ingress=true in "addons-192357"
I1025 09:32:21.325084 519259 host.go:66] Checking if "addons-192357" exists ...
I1025 09:32:21.325158 519259 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-192357"
I1025 09:32:21.325211 519259 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-192357"
I1025 09:32:21.325246 519259 host.go:66] Checking if "addons-192357" exists ...
I1025 09:32:21.324779 519259 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-192357"
I1025 09:32:21.325700 519259 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-192357"
I1025 09:32:21.325906 519259 addons.go:69] Setting gcp-auth=true in profile "addons-192357"
I1025 09:32:21.325943 519259 mustload.go:65] Loading cluster: addons-192357
I1025 09:32:21.324787 519259 addons.go:69] Setting volcano=true in profile "addons-192357"
I1025 09:32:21.326033 519259 addons.go:69] Setting ingress-dns=true in profile "addons-192357"
I1025 09:32:21.326034 519259 addons.go:238] Setting addon volcano=true in "addons-192357"
I1025 09:32:21.326066 519259 addons.go:238] Setting addon ingress-dns=true in "addons-192357"
I1025 09:32:21.326073 519259 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-192357"
I1025 09:32:21.326091 519259 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-192357"
I1025 09:32:21.326098 519259 host.go:66] Checking if "addons-192357" exists ...
I1025 09:32:21.326119 519259 host.go:66] Checking if "addons-192357" exists ...
I1025 09:32:21.326131 519259 host.go:66] Checking if "addons-192357" exists ...
I1025 09:32:21.326138 519259 config.go:182] Loaded profile config "addons-192357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:32:21.324771 519259 addons.go:238] Setting addon storage-provisioner=true in "addons-192357"
I1025 09:32:21.326749 519259 host.go:66] Checking if "addons-192357" exists ...
I1025 09:32:21.326776 519259 addons.go:69] Setting registry=true in profile "addons-192357"
I1025 09:32:21.326799 519259 addons.go:238] Setting addon registry=true in "addons-192357"
I1025 09:32:21.326828 519259 host.go:66] Checking if "addons-192357" exists ...
I1025 09:32:21.327347 519259 addons.go:69] Setting metrics-server=true in profile "addons-192357"
I1025 09:32:21.327372 519259 addons.go:238] Setting addon metrics-server=true in "addons-192357"
I1025 09:32:21.324907 519259 addons.go:69] Setting default-storageclass=true in profile "addons-192357"
I1025 09:32:21.327407 519259 host.go:66] Checking if "addons-192357" exists ...
I1025 09:32:21.327414 519259 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-192357"
I1025 09:32:21.328034 519259 out.go:179] * Verifying Kubernetes components...
I1025 09:32:21.329357 519259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1025 09:32:21.331198 519259 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1025 09:32:21.331207 519259 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
I1025 09:32:21.331199 519259 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1025 09:32:21.332213 519259 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
I1025 09:32:21.332230 519259 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.5
I1025 09:32:21.332279 519259 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1025 09:32:21.332294 519259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1025 09:32:21.332287 519259 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
I1025 09:32:21.332238 519259 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
I1025 09:32:21.332218 519259 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1025 09:32:21.332382 519259 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1025 09:32:21.333424 519259 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
I1025 09:32:21.333440 519259 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1025 09:32:21.334201 519259 host.go:66] Checking if "addons-192357" exists ...
W1025 09:32:21.335336 519259 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1025 09:32:21.336014 519259 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-192357"
I1025 09:32:21.336073 519259 host.go:66] Checking if "addons-192357" exists ...
I1025 09:32:21.336871 519259 addons.go:238] Setting addon default-storageclass=true in "addons-192357"
I1025 09:32:21.336904 519259 host.go:66] Checking if "addons-192357" exists ...
I1025 09:32:21.337107 519259 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
I1025 09:32:21.337140 519259 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1025 09:32:21.337150 519259 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1025 09:32:21.337189 519259 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
I1025 09:32:21.338005 519259 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
I1025 09:32:21.338016 519259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1025 09:32:21.338039 519259 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1025 09:32:21.337960 519259 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1025 09:32:21.337976 519259 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1025 09:32:21.338811 519259 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1025 09:32:21.339211 519259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1025 09:32:21.337960 519259 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1025 09:32:21.339556 519259 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1025 09:32:21.339574 519259 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1025 09:32:21.339619 519259 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1025 09:32:21.339638 519259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1025 09:32:21.340195 519259 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
I1025 09:32:21.340214 519259 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1025 09:32:21.340241 519259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1025 09:32:21.340216 519259 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1025 09:32:21.340275 519259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1025 09:32:21.340877 519259 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1025 09:32:21.340902 519259 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1025 09:32:21.341520 519259 out.go:179] - Using image docker.io/registry:3.0.0
I1025 09:32:21.342131 519259 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I1025 09:32:21.342146 519259 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1025 09:32:21.342295 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.342875 519259 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
I1025 09:32:21.342907 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.342907 519259 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
I1025 09:32:21.343385 519259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1025 09:32:21.343616 519259 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1025 09:32:21.343655 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.344170 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:32:21.344211 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.344328 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.344382 519259 out.go:179] - Using image docker.io/busybox:stable
I1025 09:32:21.344668 519259 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1025 09:32:21.344734 519259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1025 09:32:21.344667 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:32:21.344801 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.344979 519259 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/id_rsa Username:docker}
I1025 09:32:21.345813 519259 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/id_rsa Username:docker}
I1025 09:32:21.345882 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:32:21.345953 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.345962 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:32:21.345990 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.346013 519259 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1025 09:32:21.346027 519259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1025 09:32:21.346033 519259 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1025 09:32:21.347055 519259 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1025 09:32:21.347077 519259 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/id_rsa Username:docker}
I1025 09:32:21.347086 519259 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/id_rsa Username:docker}
I1025 09:32:21.348904 519259 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1025 09:32:21.349941 519259 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1025 09:32:21.350156 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.351212 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.351300 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:32:21.351337 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.351600 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.351689 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.351951 519259 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/id_rsa Username:docker}
I1025 09:32:21.352185 519259 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1025 09:32:21.352415 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:32:21.352458 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.352561 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.352754 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:32:21.352786 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.352790 519259 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/id_rsa Username:docker}
I1025 09:32:21.352857 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.353002 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:32:21.353037 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.353268 519259 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1025 09:32:21.353286 519259 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1025 09:32:21.353484 519259 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/id_rsa Username:docker}
I1025 09:32:21.353675 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.353752 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:32:21.353764 519259 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/id_rsa Username:docker}
I1025 09:32:21.353780 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.354107 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:32:21.354143 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.354245 519259 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/id_rsa Username:docker}
I1025 09:32:21.354541 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.354644 519259 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/id_rsa Username:docker}
I1025 09:32:21.354818 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:32:21.354851 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.355143 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:32:21.355172 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.355199 519259 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/id_rsa Username:docker}
I1025 09:32:21.355285 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.355388 519259 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/id_rsa Username:docker}
I1025 09:32:21.355779 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.356169 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:32:21.356217 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.356279 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:32:21.356316 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.356598 519259 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/id_rsa Username:docker}
I1025 09:32:21.356757 519259 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/id_rsa Username:docker}
I1025 09:32:21.357673 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.358027 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:32:21.358048 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:21.358187 519259 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/id_rsa Username:docker}
W1025 09:32:21.533984 519259 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:32790->192.168.39.24:22: read: connection reset by peer
I1025 09:32:21.534020 519259 retry.go:31] will retry after 258.403917ms: ssh: handshake failed: read tcp 192.168.39.1:32790->192.168.39.24:22: read: connection reset by peer
W1025 09:32:21.534081 519259 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:32806->192.168.39.24:22: read: connection reset by peer
I1025 09:32:21.534089 519259 retry.go:31] will retry after 266.719487ms: ssh: handshake failed: read tcp 192.168.39.1:32806->192.168.39.24:22: read: connection reset by peer
W1025 09:32:21.534137 519259 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:32822->192.168.39.24:22: read: connection reset by peer
I1025 09:32:21.534145 519259 retry.go:31] will retry after 283.801118ms: ssh: handshake failed: read tcp 192.168.39.1:32822->192.168.39.24:22: read: connection reset by peer
I1025 09:32:21.668044 519259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1025 09:32:21.668064 519259 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1025 09:32:21.694269 519259 node_ready.go:35] waiting up to 6m0s for node "addons-192357" to be "Ready" ...
I1025 09:32:21.697961 519259 node_ready.go:49] node "addons-192357" is "Ready"
I1025 09:32:21.697992 519259 node_ready.go:38] duration metric: took 3.689516ms for node "addons-192357" to be "Ready" ...
I1025 09:32:21.698006 519259 api_server.go:52] waiting for apiserver process to appear ...
I1025 09:32:21.698047 519259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 09:32:21.815713 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1025 09:32:21.932112 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1025 09:32:21.944868 519259 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1025 09:32:21.944898 519259 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1025 09:32:21.976705 519259 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1025 09:32:21.976726 519259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1025 09:32:21.989582 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1025 09:32:22.041638 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1025 09:32:22.051328 519259 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
I1025 09:32:22.051355 519259 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1025 09:32:22.091169 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1025 09:32:22.095600 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1025 09:32:22.099926 519259 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
I1025 09:32:22.099954 519259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1025 09:32:22.225331 519259 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1025 09:32:22.225355 519259 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1025 09:32:22.256413 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1025 09:32:22.257118 519259 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1025 09:32:22.257137 519259 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1025 09:32:22.297550 519259 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1025 09:32:22.297580 519259 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1025 09:32:22.385079 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 09:32:22.446256 519259 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
I1025 09:32:22.446284 519259 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1025 09:32:22.462706 519259 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
I1025 09:32:22.462732 519259 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1025 09:32:22.530048 519259 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
I1025 09:32:22.530071 519259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1025 09:32:22.629257 519259 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1025 09:32:22.629299 519259 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1025 09:32:22.674319 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1025 09:32:22.681289 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1025 09:32:22.704119 519259 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1025 09:32:22.704165 519259 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1025 09:32:22.772708 519259 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1025 09:32:22.772739 519259 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1025 09:32:22.809938 519259 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
I1025 09:32:22.809966 519259 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1025 09:32:22.900210 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1025 09:32:23.074661 519259 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1025 09:32:23.074704 519259 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1025 09:32:23.167815 519259 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1025 09:32:23.167848 519259 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1025 09:32:23.230620 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1025 09:32:23.465739 519259 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
I1025 09:32:23.465774 519259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1025 09:32:23.683064 519259 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1025 09:32:23.683108 519259 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1025 09:32:23.782016 519259 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1025 09:32:23.782048 519259 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1025 09:32:24.015615 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1025 09:32:24.235327 519259 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1025 09:32:24.235367 519259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1025 09:32:24.236331 519259 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1025 09:32:24.236348 519259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1025 09:32:24.603345 519259 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1025 09:32:24.603383 519259 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1025 09:32:24.603392 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1025 09:32:24.724140 519259 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.056049347s)
I1025 09:32:24.724169 519259 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.026102366s)
I1025 09:32:24.724200 519259 api_server.go:72] duration metric: took 3.399720726s to wait for apiserver process to appear ...
I1025 09:32:24.724191 519259 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1025 09:32:24.724209 519259 api_server.go:88] waiting for apiserver healthz status ...
I1025 09:32:24.724239 519259 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
I1025 09:32:24.769747 519259 api_server.go:279] https://192.168.39.24:8443/healthz returned 200:
ok
I1025 09:32:24.772790 519259 api_server.go:141] control plane version: v1.34.1
I1025 09:32:24.772819 519259 api_server.go:131] duration metric: took 48.601461ms to wait for apiserver health ...
I1025 09:32:24.772829 519259 system_pods.go:43] waiting for kube-system pods to appear ...
I1025 09:32:24.789307 519259 system_pods.go:59] 10 kube-system pods found
I1025 09:32:24.789351 519259 system_pods.go:61] "amd-gpu-device-plugin-bmt9x" [d0c0d8a6-5a55-4cda-9e4c-e27d5682c2ce] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1025 09:32:24.789362 519259 system_pods.go:61] "coredns-66bc5c9577-78npv" [0b2eb535-b152-4564-bdb2-ab7693d6c4ca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1025 09:32:24.789375 519259 system_pods.go:61] "coredns-66bc5c9577-x545g" [f49730b0-d6bc-4d5c-9112-7af8c91001b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1025 09:32:24.789392 519259 system_pods.go:61] "etcd-addons-192357" [ca23edb9-cd46-40b7-831a-e7ffdaafc269] Running
I1025 09:32:24.789403 519259 system_pods.go:61] "kube-apiserver-addons-192357" [d5ecad18-8a95-47c8-9031-9079024b1284] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1025 09:32:24.789415 519259 system_pods.go:61] "kube-controller-manager-addons-192357" [206d7e59-eb5b-4700-81f7-0359330bc556] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1025 09:32:24.789423 519259 system_pods.go:61] "kube-proxy-t7dv4" [f132dc9b-7f9a-4f3b-8e83-fd7e779f86c5] Running
I1025 09:32:24.789430 519259 system_pods.go:61] "kube-scheduler-addons-192357" [c09d0484-2bd3-4767-b8e8-8992c92d6866] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1025 09:32:24.789440 519259 system_pods.go:61] "nvidia-device-plugin-daemonset-wk7jc" [57c0a152-4a08-45d8-ab2a-dd0000ae9680] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1025 09:32:24.789450 519259 system_pods.go:61] "registry-creds-764b6fb674-x95zq" [7d49f6c8-d198-4bd8-890e-88cd2cd19e56] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1025 09:32:24.789462 519259 system_pods.go:74] duration metric: took 16.623614ms to wait for pod list to return data ...
I1025 09:32:24.789477 519259 default_sa.go:34] waiting for default service account to be created ...
I1025 09:32:24.834478 519259 default_sa.go:45] found service account: "default"
I1025 09:32:24.834541 519259 default_sa.go:55] duration metric: took 45.052158ms for default service account to be created ...
I1025 09:32:24.834558 519259 system_pods.go:116] waiting for k8s-apps to be running ...
I1025 09:32:24.881624 519259 system_pods.go:86] 10 kube-system pods found
I1025 09:32:24.881678 519259 system_pods.go:89] "amd-gpu-device-plugin-bmt9x" [d0c0d8a6-5a55-4cda-9e4c-e27d5682c2ce] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1025 09:32:24.881691 519259 system_pods.go:89] "coredns-66bc5c9577-78npv" [0b2eb535-b152-4564-bdb2-ab7693d6c4ca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1025 09:32:24.881705 519259 system_pods.go:89] "coredns-66bc5c9577-x545g" [f49730b0-d6bc-4d5c-9112-7af8c91001b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1025 09:32:24.881714 519259 system_pods.go:89] "etcd-addons-192357" [ca23edb9-cd46-40b7-831a-e7ffdaafc269] Running
I1025 09:32:24.881725 519259 system_pods.go:89] "kube-apiserver-addons-192357" [d5ecad18-8a95-47c8-9031-9079024b1284] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1025 09:32:24.881740 519259 system_pods.go:89] "kube-controller-manager-addons-192357" [206d7e59-eb5b-4700-81f7-0359330bc556] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1025 09:32:24.881751 519259 system_pods.go:89] "kube-proxy-t7dv4" [f132dc9b-7f9a-4f3b-8e83-fd7e779f86c5] Running
I1025 09:32:24.881761 519259 system_pods.go:89] "kube-scheduler-addons-192357" [c09d0484-2bd3-4767-b8e8-8992c92d6866] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1025 09:32:24.881772 519259 system_pods.go:89] "nvidia-device-plugin-daemonset-wk7jc" [57c0a152-4a08-45d8-ab2a-dd0000ae9680] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1025 09:32:24.881785 519259 system_pods.go:89] "registry-creds-764b6fb674-x95zq" [7d49f6c8-d198-4bd8-890e-88cd2cd19e56] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1025 09:32:24.881802 519259 system_pods.go:126] duration metric: took 47.233365ms to wait for k8s-apps to be running ...
I1025 09:32:24.881817 519259 system_svc.go:44] waiting for kubelet service to be running ....
I1025 09:32:24.881892 519259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1025 09:32:25.152614 519259 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1025 09:32:25.152648 519259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1025 09:32:25.230882 519259 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-192357" context rescaled to 1 replicas
I1025 09:32:25.382406 519259 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1025 09:32:25.382437 519259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1025 09:32:25.682804 519259 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1025 09:32:25.682840 519259 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1025 09:32:25.909603 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1025 09:32:27.123581 519259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.307823874s)
I1025 09:32:27.123662 519259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.191520317s)
I1025 09:32:27.125943 519259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.084274207s)
I1025 09:32:27.126035 519259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.034827189s)
I1025 09:32:27.126059 519259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.03043838s)
I1025 09:32:27.126116 519259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.869678302s)
I1025 09:32:27.126309 519259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.136703103s)
I1025 09:32:27.510439 519259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.125321675s)
W1025 09:32:27.510482 519259 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget created
serviceaccount/gadget created
configmap/gadget created
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
role.rbac.authorization.k8s.io/gadget-role created
rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
daemonset.apps/gadget created
stderr:
Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:32:27.510525 519259 retry.go:31] will retry after 340.603901ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget created
serviceaccount/gadget created
configmap/gadget created
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
role.rbac.authorization.k8s.io/gadget-role created
rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
daemonset.apps/gadget created
stderr:
Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:32:27.851622 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 09:32:28.754080 519259 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1025 09:32:28.756971 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:28.757411 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:32:28.757436 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:28.757599 519259 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/id_rsa Username:docker}
I1025 09:32:29.037366 519259 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1025 09:32:29.237206 519259 addons.go:238] Setting addon gcp-auth=true in "addons-192357"
I1025 09:32:29.237268 519259 host.go:66] Checking if "addons-192357" exists ...
I1025 09:32:29.238904 519259 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1025 09:32:29.241183 519259 main.go:141] libmachine: domain addons-192357 has defined MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:29.241581 519259 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:5b:46", ip: ""} in network mk-addons-192357: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:55 +0000 UTC Type:0 Mac:52:54:00:f1:5b:46 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-192357 Clientid:01:52:54:00:f1:5b:46}
I1025 09:32:29.241612 519259 main.go:141] libmachine: domain addons-192357 has defined IP address 192.168.39.24 and MAC address 52:54:00:f1:5b:46 in network mk-addons-192357
I1025 09:32:29.241764 519259 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-514677/.minikube/machines/addons-192357/id_rsa Username:docker}
I1025 09:32:29.361259 519259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.686896436s)
I1025 09:32:29.361349 519259 addons.go:479] Verifying addon ingress=true in "addons-192357"
I1025 09:32:29.361374 519259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.680049384s)
I1025 09:32:29.361419 519259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.461170222s)
I1025 09:32:29.361542 519259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.130881594s)
I1025 09:32:29.361571 519259 addons.go:479] Verifying addon metrics-server=true in "addons-192357"
I1025 09:32:29.361596 519259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.345945152s)
I1025 09:32:29.361445 519259 addons.go:479] Verifying addon registry=true in "addons-192357"
I1025 09:32:29.362719 519259 out.go:179] * Verifying registry addon...
I1025 09:32:29.362742 519259 out.go:179] * Verifying ingress addon...
I1025 09:32:29.362719 519259 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-192357 service yakd-dashboard -n yakd-dashboard
I1025 09:32:29.364856 519259 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1025 09:32:29.364931 519259 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1025 09:32:29.439664 519259 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1025 09:32:29.439679 519259 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1025 09:32:29.439686 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:29.439692 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:29.884396 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:29.886344 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:30.050872 519259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.447427854s)
W1025 09:32:30.050949 519259 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1025 09:32:30.050891 519259 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.168966586s)
I1025 09:32:30.051002 519259 retry.go:31] will retry after 230.146353ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1025 09:32:30.051027 519259 system_svc.go:56] duration metric: took 5.169203553s WaitForService to wait for kubelet
I1025 09:32:30.051044 519259 kubeadm.go:586] duration metric: took 8.726563407s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1025 09:32:30.051072 519259 node_conditions.go:102] verifying NodePressure condition ...
I1025 09:32:30.067429 519259 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1025 09:32:30.067464 519259 node_conditions.go:123] node cpu capacity is 2
I1025 09:32:30.067483 519259 node_conditions.go:105] duration metric: took 16.403007ms to run NodePressure ...
I1025 09:32:30.067513 519259 start.go:241] waiting for startup goroutines ...
I1025 09:32:30.281307 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1025 09:32:30.372624 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:30.372774 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:30.888374 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:30.900745 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:30.939887 519259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.030230434s)
I1025 09:32:30.939937 519259 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-192357"
I1025 09:32:30.941058 519259 out.go:179] * Verifying csi-hostpath-driver addon...
I1025 09:32:30.942968 519259 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1025 09:32:30.964029 519259 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1025 09:32:30.964060 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:31.379690 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:31.382450 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:31.430236 519259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.578562137s)
I1025 09:32:31.430286 519259 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.191350667s)
W1025 09:32:31.430296 519259 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:32:31.430324 519259 retry.go:31] will retry after 250.668588ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:32:31.431720 519259 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
I1025 09:32:31.432872 519259 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1025 09:32:31.433882 519259 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1025 09:32:31.433895 519259 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1025 09:32:31.480381 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:31.491143 519259 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1025 09:32:31.491164 519259 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1025 09:32:31.541399 519259 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1025 09:32:31.541436 519259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1025 09:32:31.615793 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1025 09:32:31.681846 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 09:32:31.870864 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:31.871490 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:31.971618 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:32.369202 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:32.369929 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:32.430761 519259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.149398684s)
I1025 09:32:32.449770 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:32.888368 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:32.889695 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:33.002386 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:33.050061 519259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.434215998s)
I1025 09:32:33.051131 519259 addons.go:479] Verifying addon gcp-auth=true in "addons-192357"
I1025 09:32:33.052610 519259 out.go:179] * Verifying gcp-auth addon...
I1025 09:32:33.054415 519259 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1025 09:32:33.066661 519259 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1025 09:32:33.066678 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
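[annotation] The kapi.go lines above poll the pod carrying the kubernetes.io/minikube-addons=gcp-auth label in the gcp-auth namespace until it leaves Pending. Roughly the same check can be run from the host against this profile; this is a hypothetical sketch, not a command taken from this run (the 120s timeout is an arbitrary choice):

    # Wait for the gcp-auth webhook pod to become Ready, using the label and
    # namespace shown in the log lines above (timeout chosen arbitrarily here).
    kubectl --context addons-192357 -n gcp-auth wait \
      --for=condition=Ready pod \
      -l kubernetes.io/minikube-addons=gcp-auth \
      --timeout=120s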
I1025 09:32:33.372856 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:33.373246 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:33.474347 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:33.573625 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:33.870411 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:33.870831 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:33.935055 519259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.253163569s)
W1025 09:32:33.935094 519259 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:32:33.935116 519259 retry.go:31] will retry after 594.077155ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
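[annotation] The failure above is kubectl's client-side validation rejecting /etc/kubernetes/addons/ig-crd.yaml because, per the stderr, the document has no top-level apiVersion or kind set; the ig-deployment.yaml resources keep coming back as unchanged/configured, so each retry of the same pair of files reproduces the same error. A hypothetical way to confirm what actually landed in that file inside the VM, using only paths and binaries already shown in this log (the head line count is arbitrary):

    # Inspect the first lines of the rejected manifest inside the addons-192357 VM.
    # A CRD manifest is expected to start with top-level apiVersion/kind fields.
    minikube -p addons-192357 ssh -- sudo head -n 10 /etc/kubernetes/addons/ig-crd.yaml

    # Re-run only the client-side validation against that file, without changing
    # cluster state, using the same kubectl binary the addon manager invokes.
    minikube -p addons-192357 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply --dry-run=client \
      -f /etc/kubernetes/addons/ig-crd.yaml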
I1025 09:32:33.972799 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:34.072145 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:34.368860 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:34.369798 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:34.469522 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:34.529455 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 09:32:34.559009 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:34.869982 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:34.870382 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:34.946925 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:35.058629 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
W1025 09:32:35.200668 519259 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:32:35.200707 519259 retry.go:31] will retry after 1.264563455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:32:35.369436 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:35.369593 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:35.447060 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:35.558377 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:35.870265 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:35.871353 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:35.945925 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:36.058092 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:36.369077 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:36.369161 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:36.445877 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:36.465829 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 09:32:36.558437 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:36.870037 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:36.870701 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:36.949029 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:37.058171 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:37.373629 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:37.373800 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:37.470706 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:37.540220 519259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.074354229s)
W1025 09:32:37.540273 519259 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:32:37.540298 519259 retry.go:31] will retry after 1.091949978s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:32:37.559993 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:37.870987 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:37.872265 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:37.949388 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:38.058603 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:38.370967 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:38.371349 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:38.450358 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:38.559203 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:38.633363 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 09:32:38.870079 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:38.870083 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:38.948964 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:39.059327 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:39.371141 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:39.374934 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:39.449591 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:39.560572 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
W1025 09:32:39.565279 519259 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:32:39.565306 519259 retry.go:31] will retry after 2.7450603s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:32:39.872083 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:39.874035 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:39.947204 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:40.058920 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:40.373749 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:40.373965 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:40.449228 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:40.557829 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:40.932064 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:40.933572 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:40.947976 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:41.059807 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:41.371794 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:41.372312 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:41.446732 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:41.557910 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:41.869619 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:41.869864 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:41.950413 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:42.059758 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:42.311135 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 09:32:42.376706 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:42.376907 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:42.452034 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:42.558609 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:42.870694 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:42.871430 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:42.946224 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:43.060135 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
W1025 09:32:43.227950 519259 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:32:43.228003 519259 retry.go:31] will retry after 3.219612579s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:32:43.373530 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:43.374217 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:43.449110 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:43.560175 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:43.869556 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:43.871109 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:43.947341 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:44.058291 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:44.370792 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:44.372240 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:44.450008 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:44.559925 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:44.877405 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:44.877471 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:44.947707 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:45.058101 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:45.369640 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:45.369666 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:45.449077 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:45.558031 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:45.869271 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:45.869603 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:45.949331 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:46.060152 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:46.369909 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:46.371205 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:46.447139 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:46.448015 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 09:32:46.559945 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:46.871466 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:46.871512 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:46.947068 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:47.061342 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:47.982891 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:47.982980 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:47.983771 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:47.984005 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:47.987815 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:47.989004 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:47.994869 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:48.058814 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:48.146530 519259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.698448708s)
W1025 09:32:48.146596 519259 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:32:48.146624 519259 retry.go:31] will retry after 2.225749828s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:32:48.370776 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:48.371314 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:48.447353 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:48.558446 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:48.873379 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:48.873584 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:48.946865 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:49.059189 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:49.369884 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:49.370607 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:49.446862 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:49.557987 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:49.868466 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:49.869856 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:49.946775 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:50.059051 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:50.369010 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:50.369029 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:50.373126 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 09:32:50.448352 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:50.559121 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:50.869088 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:50.869849 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:50.947155 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
W1025 09:32:51.010167 519259 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:32:51.010218 519259 retry.go:31] will retry after 3.436933907s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:32:51.058430 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:51.369679 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:51.370444 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:51.446178 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:51.558101 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:51.868267 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:51.869358 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:51.946771 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:52.059031 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:52.368836 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:52.369762 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:52.451451 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:52.558854 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:52.869196 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:52.870948 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:52.949697 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:53.059931 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:53.368606 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:53.371873 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:53.448764 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:53.557233 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:53.872943 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:53.872964 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:53.949553 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:54.062643 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:54.370307 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:54.370488 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:54.447841 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 09:32:54.451394 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:54.562352 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:54.871210 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:54.875124 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:54.947420 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:55.058128 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:55.370871 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:55.372923 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:55.450014 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:55.562461 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:55.628797 519259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.180911194s)
W1025 09:32:55.628836 519259 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:32:55.628858 519259 retry.go:31] will retry after 11.683779017s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:32:55.881539 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:55.881539 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:55.946775 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:56.058414 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:56.368992 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:56.369806 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:56.447886 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:56.557900 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:56.869268 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:56.870127 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:56.971921 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:57.070158 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:57.368808 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:57.370291 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:57.447251 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:57.558061 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:57.868815 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:57.869234 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:57.946085 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:58.058936 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:58.368846 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:58.368866 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:58.447218 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:58.557993 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:58.868353 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:58.868420 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:58.946517 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:59.058859 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:59.371382 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:59.377701 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:59.447196 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:32:59.558701 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:32:59.871924 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:32:59.872434 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:32:59.946261 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:00.060083 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:00.371303 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:00.374563 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:00.448271 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:00.557849 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:00.872218 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:00.874014 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:00.948021 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:01.067625 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:01.371396 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:01.373902 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:01.447738 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:01.557732 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:01.869424 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:01.870009 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:01.946029 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:02.064161 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:02.369688 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:02.370055 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:02.448288 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:02.563493 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:02.872471 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:02.874291 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:02.947628 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:03.059673 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:03.372927 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:03.373831 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:03.447448 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:03.558736 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:03.869726 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:03.871155 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:03.947284 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:04.058398 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:04.383536 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:04.383675 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:04.449166 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:04.558561 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:04.872657 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:04.873096 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:04.948221 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:05.062171 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:05.370038 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:05.373472 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:05.447601 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:05.559706 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:05.870042 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:05.870407 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:05.950053 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:06.057756 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:06.373265 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:06.374883 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:06.452183 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:06.557947 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:06.869634 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:06.869924 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:06.969796 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:07.057404 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:07.313669 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 09:33:07.374993 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:07.376359 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:07.450330 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:07.562741 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:07.869890 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:07.872178 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:07.949662 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:08.060883 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
W1025 09:33:08.103947 519259 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:33:08.103999 519259 retry.go:31] will retry after 19.643291837s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:33:08.369280 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:08.369639 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:08.446394 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:08.558688 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:08.867842 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:08.868518 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:08.946808 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:09.057647 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:09.369116 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:09.369172 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:09.446126 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:09.559326 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:09.868828 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:09.870537 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:09.948422 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:10.059896 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:10.371367 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:10.371571 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:10.446707 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:10.558325 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:10.869602 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:10.869599 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:10.946737 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:11.057572 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:11.367975 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:11.369608 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:11.446398 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:11.558346 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:11.870589 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:11.872388 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:11.946072 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:12.057779 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:12.369081 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:12.369099 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:12.446609 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:12.557690 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:12.869352 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:12.869937 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:12.946742 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:13.057653 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:13.371184 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:13.371449 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:13.446789 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:13.558886 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:13.871777 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:13.871925 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:13.950596 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:14.058580 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:14.371101 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:14.371650 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:14.447444 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:14.559288 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:14.869754 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:14.870550 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:14.948282 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:15.058250 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:15.369353 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1025 09:33:15.369453 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:15.447186 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:15.559083 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:15.872387 519259 kapi.go:107] duration metric: took 46.507448858s to wait for kubernetes.io/minikube-addons=registry ...
I1025 09:33:15.873045 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:15.947634 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:16.058901 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:16.371252 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:16.448405 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:16.560686 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:17.030206 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:17.032344 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:17.059338 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:17.369816 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:17.447096 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:17.558356 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:17.871918 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:17.947909 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:18.058203 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:18.369444 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:18.479362 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:18.561611 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:18.870520 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:18.947926 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:19.060118 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:19.368578 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:19.447051 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:19.558639 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:19.870001 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:19.947018 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:20.062562 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:20.369955 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:20.448655 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:20.558247 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:21.146206 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:21.146430 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:21.146583 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:21.369821 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:21.449716 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:21.557482 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:21.870419 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:21.946721 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:22.057276 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:22.369234 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:22.445829 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:22.558185 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:22.868433 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:22.946122 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:23.058186 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:23.369103 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:23.446780 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:23.557910 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:23.868523 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:23.948218 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:24.058635 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:24.369267 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:24.446727 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:24.560115 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:24.870708 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:24.948022 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:25.057440 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:25.369049 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:25.447113 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:25.559108 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:25.868807 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:25.946982 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:26.057643 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:26.369512 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:26.446973 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:26.558218 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:26.868842 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:26.946645 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:27.057284 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:27.371199 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:27.449314 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:27.560470 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:27.747859 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 09:33:27.870193 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:27.947654 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:28.060006 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:28.369781 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:28.449185 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:28.558830 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:28.871041 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:28.875278 519259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.127370288s)
W1025 09:33:28.875314 519259 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:33:28.875336 519259 retry.go:31] will retry after 15.625687923s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:33:28.946567 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:29.059111 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:29.368411 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:29.446230 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:29.558806 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:29.869963 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:29.949326 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:30.058671 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:30.614288 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:30.615993 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:30.616824 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:30.872055 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:30.968765 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:31.057693 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:31.370159 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:31.446115 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:31.558621 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:31.869742 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:31.946630 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:32.057173 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:32.369689 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:32.448432 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:32.558884 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:32.872217 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:32.947705 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:33.060294 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:33.370741 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:33.449202 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:33.559788 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:33.871366 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:33.968689 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:34.062105 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:34.371443 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:34.448424 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:34.561676 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:34.868834 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:34.950587 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:35.061336 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:35.370283 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:35.446974 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:35.563219 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:35.870733 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:35.949665 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:36.209924 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:36.369430 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:36.471071 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:36.571102 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:36.868648 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:36.950308 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:37.058280 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:37.368302 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:37.446659 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:37.558225 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:37.868353 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:37.946410 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:38.061049 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:38.370276 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:38.448195 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:38.560114 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:38.872279 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:38.972751 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:39.071988 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:39.368208 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:39.446518 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:39.559811 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:39.869187 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:39.946701 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:40.057936 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:40.369521 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:40.447427 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:40.558379 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:40.871879 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:40.975228 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:41.068141 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:41.371026 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:41.449143 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:41.559969 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:41.872384 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:41.955913 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:42.059276 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:42.372298 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:42.447110 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:42.558174 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:42.870109 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:42.945577 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:43.059262 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:43.371731 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:43.449893 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:43.559867 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:43.869726 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:43.948326 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:44.059256 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:44.368772 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:44.449108 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:44.501164 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 09:33:44.561956 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:44.884096 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:44.949783 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:45.062056 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:45.372069 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:45.446702 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:45.561054 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:45.724222 519259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.223007817s)
W1025 09:33:45.724282 519259 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:33:45.724309 519259 retry.go:31] will retry after 34.747544977s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1025 09:33:45.879785 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:45.949115 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:46.057940 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:46.369815 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:46.446746 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:46.560399 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:46.870138 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:46.946275 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:47.059520 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:47.369645 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:47.446998 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:47.557663 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:47.868956 519259 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1025 09:33:47.970297 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:48.058788 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:48.370723 519259 kapi.go:107] duration metric: took 1m19.005859824s to wait for app.kubernetes.io/name=ingress-nginx ...
I1025 09:33:48.449243 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:48.558554 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:48.951233 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:49.059490 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:49.449286 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:49.560444 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:49.949098 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:50.059313 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:50.448293 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:50.558445 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:50.947077 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:51.059336 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:51.447663 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:51.557594 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:51.947339 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:52.058391 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:52.448159 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:52.557856 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:52.949078 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:53.059485 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:53.447805 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:53.557963 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:53.947169 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:54.058460 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:54.447392 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:54.558870 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:54.948261 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:55.058132 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:55.447368 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:55.558126 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:55.947180 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:56.058285 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:56.447340 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:56.558600 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:56.948850 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:57.057745 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:57.446629 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:57.558813 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:57.946048 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:58.059020 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:58.447257 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:58.558592 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:58.948372 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:59.058343 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:59.447530 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:33:59.558536 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:33:59.949428 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1025 09:34:00.058980 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:00.447541 519259 kapi.go:107] duration metric: took 1m29.504572852s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
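Each kapi.go:96 line above is one poll of a label selector, and kapi.go:107 records how long that selector took to become Ready (registry ~46s, ingress-nginx ~1m19s, csi-hostpath-driver ~1m29s), leaving only gcp-auth still Pending. A rough manual equivalent of that polling, using the selectors exactly as they appear in the log (the --context name is this test's profile), would be:

  kubectl --context addons-192357 get pods --all-namespaces -l kubernetes.io/minikube-addons=gcp-auth
  kubectl --context addons-192357 get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx
  kubectl --context addons-192357 get pods --all-namespaces -l kubernetes.io/minikube-addons=csi-hostpath-driver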
I1025 09:34:00.557172 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:01.057788 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:01.558992 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:02.057462 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:02.557985 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:03.059723 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:03.558722 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:04.058336 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:04.557438 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:05.058646 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:05.558324 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:06.058124 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:06.557704 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:07.058576 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:07.558124 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:08.057966 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:08.558553 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:09.059032 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:09.559493 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:10.058525 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:10.557824 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:11.058995 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:11.557964 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:12.059351 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:12.557898 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:13.058912 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:13.558374 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:14.057830 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:14.558310 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:15.058330 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:15.557622 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:16.062199 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:16.558232 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:17.058623 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:17.558679 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:18.058318 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:18.557801 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:19.058744 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:19.558151 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:20.057316 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:20.472796 519259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1025 09:34:20.560241 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:21.058674 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
W1025 09:34:21.122562 519259 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
W1025 09:34:21.122703 519259 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
]
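At this point the addon manager gives up on inspektor-gadget after repeated retries, while the remaining addon waiters keep running. A hedged way to check whether the failure reproduces outside the retry loop would be to rerun the same apply by hand (command and paths copied verbatim from the lines above; the --validate=false flag kubectl suggests would merely skip the check, not repair the manifest):

  out/minikube-linux-amd64 -p addons-192357 ssh "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml"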
I1025 09:34:21.558101 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:22.057754 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:22.558200 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:23.058882 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:23.558982 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:24.058235 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1025 09:34:24.557726 519259 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
[... the identical "waiting for pod kubernetes.io/minikube-addons=gcp-auth, current state: Pending: [<nil>]" message repeats roughly every 0.5 seconds from 09:34:25.058566 through 09:34:53.558267 ...]
I1025 09:34:54.058000 519259 kapi.go:107] duration metric: took 2m21.003581859s to wait for kubernetes.io/minikube-addons=gcp-auth ...
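The kapi.go:96/kapi.go:107 lines above come from minikube's label-selector polling loop. As a rough illustration only (not minikube's actual code), a wait like this can be written with client-go; the kubeconfig path, the 0.5s interval, the 3-minute timeout, and the "gcp-auth" namespace below are assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls until at least one pod matching selector is Running,
// logging the current phase on each attempt, similar to the lines above.
func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 3*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, err
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForLabel(context.Background(), cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth"); err != nil {
		panic(err)
	}
}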
I1025 09:34:54.059589 519259 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-192357 cluster.
I1025 09:34:54.060659 519259 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1025 09:34:54.061697 519259 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
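As a concrete illustration of the skip label mentioned above, here is a minimal sketch that builds such a pod with client-go types; the pod name, image, and label value are placeholders (per the message above, the presence of the gcp-auth-skip-secret key is what matters):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Pods carrying the gcp-auth-skip-secret label are skipped by the
	// gcp-auth webhook, so no credential secret is mounted into them.
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds", // placeholder name
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			// placeholder container; any workload spec works the same way
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}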
I1025 09:34:54.062818 519259 out.go:179] * Enabled addons: storage-provisioner, amd-gpu-device-plugin, cloud-spanner, registry-creds, ingress-dns, default-storageclass, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
I1025 09:34:54.063892 519259 addons.go:514] duration metric: took 2m32.739365672s for enable addons: enabled=[storage-provisioner amd-gpu-device-plugin cloud-spanner registry-creds ingress-dns default-storageclass metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
I1025 09:34:54.063935 519259 start.go:246] waiting for cluster config update ...
I1025 09:34:54.063956 519259 start.go:255] writing updated cluster config ...
I1025 09:34:54.064239 519259 ssh_runner.go:195] Run: rm -f paused
I1025 09:34:54.069934 519259 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1025 09:34:54.073567 519259 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-78npv" in "kube-system" namespace to be "Ready" or be gone ...
I1025 09:34:54.078089 519259 pod_ready.go:94] pod "coredns-66bc5c9577-78npv" is "Ready"
I1025 09:34:54.078104 519259 pod_ready.go:86] duration metric: took 4.520822ms for pod "coredns-66bc5c9577-78npv" in "kube-system" namespace to be "Ready" or be gone ...
I1025 09:34:54.080600 519259 pod_ready.go:83] waiting for pod "etcd-addons-192357" in "kube-system" namespace to be "Ready" or be gone ...
I1025 09:34:54.085119 519259 pod_ready.go:94] pod "etcd-addons-192357" is "Ready"
I1025 09:34:54.085138 519259 pod_ready.go:86] duration metric: took 4.517594ms for pod "etcd-addons-192357" in "kube-system" namespace to be "Ready" or be gone ...
I1025 09:34:54.087098 519259 pod_ready.go:83] waiting for pod "kube-apiserver-addons-192357" in "kube-system" namespace to be "Ready" or be gone ...
I1025 09:34:54.092332 519259 pod_ready.go:94] pod "kube-apiserver-addons-192357" is "Ready"
I1025 09:34:54.092351 519259 pod_ready.go:86] duration metric: took 5.23415ms for pod "kube-apiserver-addons-192357" in "kube-system" namespace to be "Ready" or be gone ...
I1025 09:34:54.094316 519259 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-192357" in "kube-system" namespace to be "Ready" or be gone ...
I1025 09:34:54.474514 519259 pod_ready.go:94] pod "kube-controller-manager-addons-192357" is "Ready"
I1025 09:34:54.474550 519259 pod_ready.go:86] duration metric: took 380.217899ms for pod "kube-controller-manager-addons-192357" in "kube-system" namespace to be "Ready" or be gone ...
I1025 09:34:54.674253 519259 pod_ready.go:83] waiting for pod "kube-proxy-t7dv4" in "kube-system" namespace to be "Ready" or be gone ...
I1025 09:34:55.075260 519259 pod_ready.go:94] pod "kube-proxy-t7dv4" is "Ready"
I1025 09:34:55.075288 519259 pod_ready.go:86] duration metric: took 401.006443ms for pod "kube-proxy-t7dv4" in "kube-system" namespace to be "Ready" or be gone ...
I1025 09:34:55.275107 519259 pod_ready.go:83] waiting for pod "kube-scheduler-addons-192357" in "kube-system" namespace to be "Ready" or be gone ...
I1025 09:34:55.676366 519259 pod_ready.go:94] pod "kube-scheduler-addons-192357" is "Ready"
I1025 09:34:55.676395 519259 pod_ready.go:86] duration metric: took 401.26071ms for pod "kube-scheduler-addons-192357" in "kube-system" namespace to be "Ready" or be gone ...
I1025 09:34:55.676406 519259 pod_ready.go:40] duration metric: took 1.606444225s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
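The pod_ready.go checks above key off the pod's PodReady condition rather than its phase. A minimal sketch of that kind of check (the helper name isPodReady is made up, not minikube's):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the PodReady condition is True, which is what
// the `pod "..." is "Ready"` lines above amount to.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{}
	pod.Status.Conditions = []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}
	fmt.Println("ready:", isPodReady(pod)) // ready: true
}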
I1025 09:34:55.722255 519259 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
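start.go:624 logs the client/cluster version pair and their minor-version skew (0 here, so no warning follows). A simplified sketch of that comparison, assuming well-formed "major.minor.patch" strings:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// two "major.minor.patch" version strings, e.g. "1.34.1".
func minorSkew(client, cluster string) (int, error) {
	cm, err := minor(client)
	if err != nil {
		return 0, err
	}
	sm, err := minor(cluster)
	if err != nil {
		return 0, err
	}
	if cm > sm {
		return cm - sm, nil
	}
	return sm - cm, nil
}

func minor(v string) (int, error) {
	parts := strings.Split(v, ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("unexpected version %q", v)
	}
	return strconv.Atoi(parts[1])
}

func main() {
	skew, err := minorSkew("1.34.1", "1.34.1")
	if err != nil {
		panic(err)
	}
	fmt.Println("minor skew:", skew) // minor skew: 0
}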
I1025 09:34:55.723690 519259 out.go:179] * Done! kubectl is now configured to use "addons-192357" cluster and "default" namespace by default
==> CRI-O <==
Oct 25 09:38:11 addons-192357 crio[817]: time="2025-10-25 09:38:11.941426587Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=20032229-ed20-40b8-873f-8a955a3270f2 name=/runtime.v1.RuntimeService/ListContainers
Oct 25 09:38:11 addons-192357 crio[817]: time="2025-10-25 09:38:11.941478812Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=20032229-ed20-40b8-873f-8a955a3270f2 name=/runtime.v1.RuntimeService/ListContainers
Oct 25 09:38:11 addons-192357 crio[817]: time="2025-10-25 09:38:11.941782623Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a2cb819387bb749942da95db3ae5fd82d58a0ab340cd0a26823b8a7973b32b9,PodSandboxId:897df1345932e38e3f86d723b9d4c2e7fd1a7778fb18d516466f3b15d96c2a7c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761384948244193495,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 483f2351-0a72-4e13-a1e4-258f9c460626,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f5c736907bbd1ccef87a0841b3fddcbacaf4fed996175188eb29b7393c8e42,PodSandboxId:1119705bbbc91b0e5cb108d819843a0d61436aeb14cc9f2e548b83a289e11c89,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761384900493212729,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a47a3b6-e208-4f4f-a0fc-95484356bbf4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dce2dd7f016003d6b4274c017ce79af814477b413bdf9c5cbf0a5142a28989e,PodSandboxId:56355ca7adaf4aaa546fe2466566b40f344494f4a77984700d5b457275ca08fd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761384827725624392,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-4sg7b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c2ef3cc2-f1d2-4415-aad8-80624af7e075,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:3007d5d9b87ff5a8e87eff71e92bea173404f081a645d4b08fb42c708bf021ca,PodSandboxId:25528dd9fcaba0e01571d6af1605d448677b65783b88dc1b90b145239fe92f01,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1761384812902279367,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-gvdhp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7288ad0f-fa5e-4e00-8ab8-d7060ba3c8ba,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5692922f62ea7af6bb89630ba2d7bd45cae20fcf9273c5416ba348afde6be4ab,PodSandboxId:8482a2f23e124c87a85cc4f6f79a68908152a2425ca4cc31ac1113ea9a144917,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761384812516728993,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6wz5v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28faf35c-7fd6-4cc3-8bd9-38f70a870a4c,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85eb071014eb37cf1ad22245d26c86a80b13d696082c9ce4a6cc4ed49c136b6e,PodSandboxId:b66b29855c11cc2041fc1709f0e2053f25a11d42d79599394ecd245b99ddd5f8,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761384801697489576,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-lpq46,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 27e2fdc7-ffc4-4f8b-8e61-ad75ddcf6b7d,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8b0e31ea35ec83366317af5571b8c87a6251eb27630e5c8e174d5a879ee6adc,PodSandboxId:a5214c32c3864173071ea0d18b115f78190e44b8b985cb438ff424f93717f36b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761384786824144619,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d211fbc-bb73-4120-8e81-f7a0849b7d00,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d067f446f0eb847668be40583d57c553a3e22932958d50ef2d125f6b576aba6f,PodSandboxId:f337d3f0cf36ce42d2febe982813c5940e9c613e2589823c962deafe493f0e4e,Metadata:&ContainerMetadata{Name:nvidia-
device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:3c54348fe5a57e5700e7d8068e7531d2ef2d5f3ccb70c8f6bac0953432527abd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcbf0ecf3195887f4b6b497d542660d9e7b1409b502bfddc284c04e3d8155f57,State:CONTAINER_RUNNING,CreatedAt:1761384770165207735,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-wk7jc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57c0a152-4a08-45d8-ab2a-dd0000ae9680,},Annotations:map[string]string{io.kubernetes.container.hash: f71f4593,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79bf6998c3ad28685bb542052392a59cd1101990da93dc8350793990dcf672d0,PodSandboxId:5bce50485356851390ba2b706b789b604ec5bcb253b5910c24
de228dcf8aa562,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761384753018901956,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bmt9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0c0d8a6-5a55-4cda-9e4c-e27d5682c2ce,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ec7df59340af4c1eaa07788351c871e4cecbfb0605d6fdf26c5560034b2d8bc,PodSandboxId:b9203112ad
5848eb626e71d936c838dbb285dcd0101c4684b23f6ac42200e587,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761384749637494370,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6254c945-2633-4da3-b8a1-cad4e38a10a0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9bbe3443120ba39b50cee1e53197112542b15a7fc752601f70aca67cb233db7,PodSandboxId:3127f96c0baeb94de48b04
e221941380df8258bfc235fd60a3904245431aab48,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761384743540861082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-78npv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2eb535-b152-4564-bdb2-ab7693d6c4ca,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35bcf879471a5789c49c7b5e0aa6584b1d9d5d1d86a66e81c17d3c25c30f14a,PodSandboxId:f2597388f6315d820b3f9f336103cf495ee91b866eddc291b612536c9feca80b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761384742736976237,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t7dv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f132dc9b-7f9a-4f3b-8e83-fd7e779f86c5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08330f99452fd04c00d3ebf0081daaab6660dc3427391532830d3485ef7a6630,PodSandboxId:94dbd4309de0ab7cdcb9e135e7602b7e852eb5d75ce7ab5b4bf01b41ead78c3d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761384730988442729,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-192357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a1bc1b6d2fd5e5c83e13a95727e6fbd,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\
"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc85502dd6933ebaa9c636252c2ed8deeefc7c1c6d8006bf1515451ed380d77a,PodSandboxId:8c998f398b56c359bde2553c13de51097722aec4dbc25973ebf69c46d87b4c94,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761384730976228816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-192357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7c4fb8cf1ef2453c2dff4b1338f9244,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a80b591e8d70dd270c4e12d64052b5d0e794267800051ab89ad490437d19b9a2,PodSandboxId:07bbe3646cdc91b19ae3c7b401d1c8e518dba0c56a2dabfe02fce63fb3150e1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761384730960855538,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-192357,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: d89c129098ff337cde637f2cb05c6d91,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7d132b43254ef28561225f44344773ea7e974be75efa1028b7b242888bbe11,PodSandboxId:4784ab85e7f00fc7b76af978ff005bbe77f45bbc9e39aeafbd2fe975ed9a9501,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761384730939937537,Labels:map[string]string{io.kube
rnetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-192357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88753b2f38182730fd36a847747b1f92,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=20032229-ed20-40b8-873f-8a955a3270f2 name=/runtime.v1.RuntimeService/ListContainers
Oct 25 09:38:11 addons-192357 crio[817]: time="2025-10-25 09:38:11.960071729Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/1.0" file="docker/docker_client.go:631"
Oct 25 09:38:11 addons-192357 crio[817]: time="2025-10-25 09:38:11.983870962Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4ff456f8-86f4-46cf-ad59-aa8571f95a59 name=/runtime.v1.RuntimeService/Version
Oct 25 09:38:11 addons-192357 crio[817]: time="2025-10-25 09:38:11.983931122Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4ff456f8-86f4-46cf-ad59-aa8571f95a59 name=/runtime.v1.RuntimeService/Version
Oct 25 09:38:11 addons-192357 crio[817]: time="2025-10-25 09:38:11.985173554Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=57092442-6ea5-4aae-87ce-e9f1cd9210c4 name=/runtime.v1.ImageService/ImageFsInfo
Oct 25 09:38:11 addons-192357 crio[817]: time="2025-10-25 09:38:11.986603147Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761385091986581136,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57092442-6ea5-4aae-87ce-e9f1cd9210c4 name=/runtime.v1.ImageService/ImageFsInfo
Oct 25 09:38:11 addons-192357 crio[817]: time="2025-10-25 09:38:11.987127422Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b89abbb-abb4-4a71-bf11-91779f1460f9 name=/runtime.v1.RuntimeService/ListContainers
Oct 25 09:38:11 addons-192357 crio[817]: time="2025-10-25 09:38:11.987187735Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b89abbb-abb4-4a71-bf11-91779f1460f9 name=/runtime.v1.RuntimeService/ListContainers
Oct 25 09:38:11 addons-192357 crio[817]: time="2025-10-25 09:38:11.987543989Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a2cb819387bb749942da95db3ae5fd82d58a0ab340cd0a26823b8a7973b32b9,PodSandboxId:897df1345932e38e3f86d723b9d4c2e7fd1a7778fb18d516466f3b15d96c2a7c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761384948244193495,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 483f2351-0a72-4e13-a1e4-258f9c460626,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f5c736907bbd1ccef87a0841b3fddcbacaf4fed996175188eb29b7393c8e42,PodSandboxId:1119705bbbc91b0e5cb108d819843a0d61436aeb14cc9f2e548b83a289e11c89,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761384900493212729,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a47a3b6-e208-4f4f-a0fc-95484356bbf4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dce2dd7f016003d6b4274c017ce79af814477b413bdf9c5cbf0a5142a28989e,PodSandboxId:56355ca7adaf4aaa546fe2466566b40f344494f4a77984700d5b457275ca08fd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761384827725624392,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-4sg7b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c2ef3cc2-f1d2-4415-aad8-80624af7e075,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:3007d5d9b87ff5a8e87eff71e92bea173404f081a645d4b08fb42c708bf021ca,PodSandboxId:25528dd9fcaba0e01571d6af1605d448677b65783b88dc1b90b145239fe92f01,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1761384812902279367,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-gvdhp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7288ad0f-fa5e-4e00-8ab8-d7060ba3c8ba,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5692922f62ea7af6bb89630ba2d7bd45cae20fcf9273c5416ba348afde6be4ab,PodSandboxId:8482a2f23e124c87a85cc4f6f79a68908152a2425ca4cc31ac1113ea9a144917,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761384812516728993,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6wz5v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28faf35c-7fd6-4cc3-8bd9-38f70a870a4c,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85eb071014eb37cf1ad22245d26c86a80b13d696082c9ce4a6cc4ed49c136b6e,PodSandboxId:b66b29855c11cc2041fc1709f0e2053f25a11d42d79599394ecd245b99ddd5f8,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761384801697489576,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-lpq46,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 27e2fdc7-ffc4-4f8b-8e61-ad75ddcf6b7d,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8b0e31ea35ec83366317af5571b8c87a6251eb27630e5c8e174d5a879ee6adc,PodSandboxId:a5214c32c3864173071ea0d18b115f78190e44b8b985cb438ff424f93717f36b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761384786824144619,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d211fbc-bb73-4120-8e81-f7a0849b7d00,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d067f446f0eb847668be40583d57c553a3e22932958d50ef2d125f6b576aba6f,PodSandboxId:f337d3f0cf36ce42d2febe982813c5940e9c613e2589823c962deafe493f0e4e,Metadata:&ContainerMetadata{Name:nvidia-
device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:3c54348fe5a57e5700e7d8068e7531d2ef2d5f3ccb70c8f6bac0953432527abd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcbf0ecf3195887f4b6b497d542660d9e7b1409b502bfddc284c04e3d8155f57,State:CONTAINER_RUNNING,CreatedAt:1761384770165207735,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-wk7jc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57c0a152-4a08-45d8-ab2a-dd0000ae9680,},Annotations:map[string]string{io.kubernetes.container.hash: f71f4593,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79bf6998c3ad28685bb542052392a59cd1101990da93dc8350793990dcf672d0,PodSandboxId:5bce50485356851390ba2b706b789b604ec5bcb253b5910c24
de228dcf8aa562,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761384753018901956,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bmt9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0c0d8a6-5a55-4cda-9e4c-e27d5682c2ce,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ec7df59340af4c1eaa07788351c871e4cecbfb0605d6fdf26c5560034b2d8bc,PodSandboxId:b9203112ad
5848eb626e71d936c838dbb285dcd0101c4684b23f6ac42200e587,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761384749637494370,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6254c945-2633-4da3-b8a1-cad4e38a10a0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9bbe3443120ba39b50cee1e53197112542b15a7fc752601f70aca67cb233db7,PodSandboxId:3127f96c0baeb94de48b04
e221941380df8258bfc235fd60a3904245431aab48,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761384743540861082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-78npv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2eb535-b152-4564-bdb2-ab7693d6c4ca,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35bcf879471a5789c49c7b5e0aa6584b1d9d5d1d86a66e81c17d3c25c30f14a,PodSandboxId:f2597388f6315d820b3f9f336103cf495ee91b866eddc291b612536c9feca80b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761384742736976237,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t7dv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f132dc9b-7f9a-4f3b-8e83-fd7e779f86c5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08330f99452fd04c00d3ebf0081daaab6660dc3427391532830d3485ef7a6630,PodSandboxId:94dbd4309de0ab7cdcb9e135e7602b7e852eb5d75ce7ab5b4bf01b41ead78c3d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761384730988442729,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-192357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a1bc1b6d2fd5e5c83e13a95727e6fbd,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\
"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc85502dd6933ebaa9c636252c2ed8deeefc7c1c6d8006bf1515451ed380d77a,PodSandboxId:8c998f398b56c359bde2553c13de51097722aec4dbc25973ebf69c46d87b4c94,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761384730976228816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-192357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7c4fb8cf1ef2453c2dff4b1338f9244,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a80b591e8d70dd270c4e12d64052b5d0e794267800051ab89ad490437d19b9a2,PodSandboxId:07bbe3646cdc91b19ae3c7b401d1c8e518dba0c56a2dabfe02fce63fb3150e1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761384730960855538,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-192357,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: d89c129098ff337cde637f2cb05c6d91,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7d132b43254ef28561225f44344773ea7e974be75efa1028b7b242888bbe11,PodSandboxId:4784ab85e7f00fc7b76af978ff005bbe77f45bbc9e39aeafbd2fe975ed9a9501,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761384730939937537,Labels:map[string]string{io.kube
rnetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-192357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88753b2f38182730fd36a847747b1f92,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b89abbb-abb4-4a71-bf11-91779f1460f9 name=/runtime.v1.RuntimeService/ListContainers
Oct 25 09:38:12 addons-192357 crio[817]: time="2025-10-25 09:38:12.026376591Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4ba429d9-1c4c-48a5-adfd-5175a15bc85e name=/runtime.v1.RuntimeService/Version
Oct 25 09:38:12 addons-192357 crio[817]: time="2025-10-25 09:38:12.026464391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4ba429d9-1c4c-48a5-adfd-5175a15bc85e name=/runtime.v1.RuntimeService/Version
Oct 25 09:38:12 addons-192357 crio[817]: time="2025-10-25 09:38:12.027638426Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b88cc6be-5fdd-427d-b4b1-897b2a441e33 name=/runtime.v1.ImageService/ImageFsInfo
Oct 25 09:38:12 addons-192357 crio[817]: time="2025-10-25 09:38:12.029265025Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761385092029207211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b88cc6be-5fdd-427d-b4b1-897b2a441e33 name=/runtime.v1.ImageService/ImageFsInfo
Oct 25 09:38:12 addons-192357 crio[817]: time="2025-10-25 09:38:12.029886642Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=adc92f86-df27-4429-b0ec-e44c231170bb name=/runtime.v1.RuntimeService/ListContainers
Oct 25 09:38:12 addons-192357 crio[817]: time="2025-10-25 09:38:12.030195819Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=adc92f86-df27-4429-b0ec-e44c231170bb name=/runtime.v1.RuntimeService/ListContainers
Oct 25 09:38:12 addons-192357 crio[817]: time="2025-10-25 09:38:12.030815324Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a2cb819387bb749942da95db3ae5fd82d58a0ab340cd0a26823b8a7973b32b9,PodSandboxId:897df1345932e38e3f86d723b9d4c2e7fd1a7778fb18d516466f3b15d96c2a7c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761384948244193495,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 483f2351-0a72-4e13-a1e4-258f9c460626,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f5c736907bbd1ccef87a0841b3fddcbacaf4fed996175188eb29b7393c8e42,PodSandboxId:1119705bbbc91b0e5cb108d819843a0d61436aeb14cc9f2e548b83a289e11c89,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761384900493212729,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a47a3b6-e208-4f4f-a0fc-95484356bbf4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dce2dd7f016003d6b4274c017ce79af814477b413bdf9c5cbf0a5142a28989e,PodSandboxId:56355ca7adaf4aaa546fe2466566b40f344494f4a77984700d5b457275ca08fd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761384827725624392,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-4sg7b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c2ef3cc2-f1d2-4415-aad8-80624af7e075,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:3007d5d9b87ff5a8e87eff71e92bea173404f081a645d4b08fb42c708bf021ca,PodSandboxId:25528dd9fcaba0e01571d6af1605d448677b65783b88dc1b90b145239fe92f01,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1761384812902279367,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-gvdhp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7288ad0f-fa5e-4e00-8ab8-d7060ba3c8ba,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5692922f62ea7af6bb89630ba2d7bd45cae20fcf9273c5416ba348afde6be4ab,PodSandboxId:8482a2f23e124c87a85cc4f6f79a68908152a2425ca4cc31ac1113ea9a144917,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761384812516728993,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6wz5v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28faf35c-7fd6-4cc3-8bd9-38f70a870a4c,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85eb071014eb37cf1ad22245d26c86a80b13d696082c9ce4a6cc4ed49c136b6e,PodSandboxId:b66b29855c11cc2041fc1709f0e2053f25a11d42d79599394ecd245b99ddd5f8,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761384801697489576,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-lpq46,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 27e2fdc7-ffc4-4f8b-8e61-ad75ddcf6b7d,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8b0e31ea35ec83366317af5571b8c87a6251eb27630e5c8e174d5a879ee6adc,PodSandboxId:a5214c32c3864173071ea0d18b115f78190e44b8b985cb438ff424f93717f36b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761384786824144619,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d211fbc-bb73-4120-8e81-f7a0849b7d00,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d067f446f0eb847668be40583d57c553a3e22932958d50ef2d125f6b576aba6f,PodSandboxId:f337d3f0cf36ce42d2febe982813c5940e9c613e2589823c962deafe493f0e4e,Metadata:&ContainerMetadata{Name:nvidia-
device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:3c54348fe5a57e5700e7d8068e7531d2ef2d5f3ccb70c8f6bac0953432527abd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcbf0ecf3195887f4b6b497d542660d9e7b1409b502bfddc284c04e3d8155f57,State:CONTAINER_RUNNING,CreatedAt:1761384770165207735,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-wk7jc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57c0a152-4a08-45d8-ab2a-dd0000ae9680,},Annotations:map[string]string{io.kubernetes.container.hash: f71f4593,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79bf6998c3ad28685bb542052392a59cd1101990da93dc8350793990dcf672d0,PodSandboxId:5bce50485356851390ba2b706b789b604ec5bcb253b5910c24
de228dcf8aa562,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761384753018901956,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bmt9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0c0d8a6-5a55-4cda-9e4c-e27d5682c2ce,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ec7df59340af4c1eaa07788351c871e4cecbfb0605d6fdf26c5560034b2d8bc,PodSandboxId:b9203112ad
5848eb626e71d936c838dbb285dcd0101c4684b23f6ac42200e587,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761384749637494370,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6254c945-2633-4da3-b8a1-cad4e38a10a0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9bbe3443120ba39b50cee1e53197112542b15a7fc752601f70aca67cb233db7,PodSandboxId:3127f96c0baeb94de48b04
e221941380df8258bfc235fd60a3904245431aab48,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761384743540861082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-78npv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2eb535-b152-4564-bdb2-ab7693d6c4ca,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35bcf879471a5789c49c7b5e0aa6584b1d9d5d1d86a66e81c17d3c25c30f14a,PodSandboxId:f2597388f6315d820b3f9f336103cf495ee91b866eddc291b612536c9feca80b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761384742736976237,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t7dv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f132dc9b-7f9a-4f3b-8e83-fd7e779f86c5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08330f99452fd04c00d3ebf0081daaab6660dc3427391532830d3485ef7a6630,PodSandboxId:94dbd4309de0ab7cdcb9e135e7602b7e852eb5d75ce7ab5b4bf01b41ead78c3d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761384730988442729,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-192357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a1bc1b6d2fd5e5c83e13a95727e6fbd,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\
"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc85502dd6933ebaa9c636252c2ed8deeefc7c1c6d8006bf1515451ed380d77a,PodSandboxId:8c998f398b56c359bde2553c13de51097722aec4dbc25973ebf69c46d87b4c94,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761384730976228816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-192357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7c4fb8cf1ef2453c2dff4b1338f9244,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a80b591e8d70dd270c4e12d64052b5d0e794267800051ab89ad490437d19b9a2,PodSandboxId:07bbe3646cdc91b19ae3c7b401d1c8e518dba0c56a2dabfe02fce63fb3150e1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761384730960855538,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-192357,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: d89c129098ff337cde637f2cb05c6d91,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7d132b43254ef28561225f44344773ea7e974be75efa1028b7b242888bbe11,PodSandboxId:4784ab85e7f00fc7b76af978ff005bbe77f45bbc9e39aeafbd2fe975ed9a9501,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761384730939937537,Labels:map[string]string{io.kube
rnetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-192357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88753b2f38182730fd36a847747b1f92,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=adc92f86-df27-4429-b0ec-e44c231170bb name=/runtime.v1.RuntimeService/ListContainers
Oct 25 09:38:12 addons-192357 crio[817]: time="2025-10-25 09:38:12.064532767Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=06ea134b-7a01-4fca-8986-dd867de7b73d name=/runtime.v1.RuntimeService/Version
Oct 25 09:38:12 addons-192357 crio[817]: time="2025-10-25 09:38:12.064746289Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=06ea134b-7a01-4fca-8986-dd867de7b73d name=/runtime.v1.RuntimeService/Version
Oct 25 09:38:12 addons-192357 crio[817]: time="2025-10-25 09:38:12.066961354Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5d76b5fa-4749-44a4-97c1-7baa543cbeac name=/runtime.v1.ImageService/ImageFsInfo
Oct 25 09:38:12 addons-192357 crio[817]: time="2025-10-25 09:38:12.068743593Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761385092068714082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5d76b5fa-4749-44a4-97c1-7baa543cbeac name=/runtime.v1.ImageService/ImageFsInfo
Oct 25 09:38:12 addons-192357 crio[817]: time="2025-10-25 09:38:12.069419836Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55cf28c6-a715-43a3-998b-2e6c3836572b name=/runtime.v1.RuntimeService/ListContainers
Oct 25 09:38:12 addons-192357 crio[817]: time="2025-10-25 09:38:12.069470693Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55cf28c6-a715-43a3-998b-2e6c3836572b name=/runtime.v1.RuntimeService/ListContainers
Oct 25 09:38:12 addons-192357 crio[817]: time="2025-10-25 09:38:12.069797599Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a2cb819387bb749942da95db3ae5fd82d58a0ab340cd0a26823b8a7973b32b9,PodSandboxId:897df1345932e38e3f86d723b9d4c2e7fd1a7778fb18d516466f3b15d96c2a7c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761384948244193495,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 483f2351-0a72-4e13-a1e4-258f9c460626,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f5c736907bbd1ccef87a0841b3fddcbacaf4fed996175188eb29b7393c8e42,PodSandboxId:1119705bbbc91b0e5cb108d819843a0d61436aeb14cc9f2e548b83a289e11c89,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761384900493212729,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a47a3b6-e208-4f4f-a0fc-95484356bbf4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dce2dd7f016003d6b4274c017ce79af814477b413bdf9c5cbf0a5142a28989e,PodSandboxId:56355ca7adaf4aaa546fe2466566b40f344494f4a77984700d5b457275ca08fd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761384827725624392,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-4sg7b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c2ef3cc2-f1d2-4415-aad8-80624af7e075,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:3007d5d9b87ff5a8e87eff71e92bea173404f081a645d4b08fb42c708bf021ca,PodSandboxId:25528dd9fcaba0e01571d6af1605d448677b65783b88dc1b90b145239fe92f01,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1761384812902279367,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-gvdhp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7288ad0f-fa5e-4e00-8ab8-d7060ba3c8ba,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5692922f62ea7af6bb89630ba2d7bd45cae20fcf9273c5416ba348afde6be4ab,PodSandboxId:8482a2f23e124c87a85cc4f6f79a68908152a2425ca4cc31ac1113ea9a144917,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761384812516728993,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6wz5v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28faf35c-7fd6-4cc3-8bd9-38f70a870a4c,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85eb071014eb37cf1ad22245d26c86a80b13d696082c9ce4a6cc4ed49c136b6e,PodSandboxId:b66b29855c11cc2041fc1709f0e2053f25a11d42d79599394ecd245b99ddd5f8,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761384801697489576,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-lpq46,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 27e2fdc7-ffc4-4f8b-8e61-ad75ddcf6b7d,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8b0e31ea35ec83366317af5571b8c87a6251eb27630e5c8e174d5a879ee6adc,PodSandboxId:a5214c32c3864173071ea0d18b115f78190e44b8b985cb438ff424f93717f36b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761384786824144619,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d211fbc-bb73-4120-8e81-f7a0849b7d00,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d067f446f0eb847668be40583d57c553a3e22932958d50ef2d125f6b576aba6f,PodSandboxId:f337d3f0cf36ce42d2febe982813c5940e9c613e2589823c962deafe493f0e4e,Metadata:&ContainerMetadata{Name:nvidia-
device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:3c54348fe5a57e5700e7d8068e7531d2ef2d5f3ccb70c8f6bac0953432527abd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcbf0ecf3195887f4b6b497d542660d9e7b1409b502bfddc284c04e3d8155f57,State:CONTAINER_RUNNING,CreatedAt:1761384770165207735,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-wk7jc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57c0a152-4a08-45d8-ab2a-dd0000ae9680,},Annotations:map[string]string{io.kubernetes.container.hash: f71f4593,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79bf6998c3ad28685bb542052392a59cd1101990da93dc8350793990dcf672d0,PodSandboxId:5bce50485356851390ba2b706b789b604ec5bcb253b5910c24
de228dcf8aa562,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761384753018901956,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bmt9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0c0d8a6-5a55-4cda-9e4c-e27d5682c2ce,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ec7df59340af4c1eaa07788351c871e4cecbfb0605d6fdf26c5560034b2d8bc,PodSandboxId:b9203112ad
5848eb626e71d936c838dbb285dcd0101c4684b23f6ac42200e587,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761384749637494370,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6254c945-2633-4da3-b8a1-cad4e38a10a0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9bbe3443120ba39b50cee1e53197112542b15a7fc752601f70aca67cb233db7,PodSandboxId:3127f96c0baeb94de48b04
e221941380df8258bfc235fd60a3904245431aab48,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761384743540861082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-78npv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2eb535-b152-4564-bdb2-ab7693d6c4ca,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35bcf879471a5789c49c7b5e0aa6584b1d9d5d1d86a66e81c17d3c25c30f14a,PodSandboxId:f2597388f6315d820b3f9f336103cf495ee91b866eddc291b612536c9feca80b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761384742736976237,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t7dv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f132dc9b-7f9a-4f3b-8e83-fd7e779f86c5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08330f99452fd04c00d3ebf0081daaab6660dc3427391532830d3485ef7a6630,PodSandboxId:94dbd4309de0ab7cdcb9e135e7602b7e852eb5d75ce7ab5b4bf01b41ead78c3d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761384730988442729,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-192357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a1bc1b6d2fd5e5c83e13a95727e6fbd,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\
"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc85502dd6933ebaa9c636252c2ed8deeefc7c1c6d8006bf1515451ed380d77a,PodSandboxId:8c998f398b56c359bde2553c13de51097722aec4dbc25973ebf69c46d87b4c94,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761384730976228816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-192357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7c4fb8cf1ef2453c2dff4b1338f9244,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a80b591e8d70dd270c4e12d64052b5d0e794267800051ab89ad490437d19b9a2,PodSandboxId:07bbe3646cdc91b19ae3c7b401d1c8e518dba0c56a2dabfe02fce63fb3150e1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761384730960855538,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-192357,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: d89c129098ff337cde637f2cb05c6d91,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7d132b43254ef28561225f44344773ea7e974be75efa1028b7b242888bbe11,PodSandboxId:4784ab85e7f00fc7b76af978ff005bbe77f45bbc9e39aeafbd2fe975ed9a9501,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761384730939937537,Labels:map[string]string{io.kube
rnetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-192357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88753b2f38182730fd36a847747b1f92,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55cf28c6-a715-43a3-998b-2e6c3836572b name=/runtime.v1.RuntimeService/ListContainers
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
5a2cb819387bb docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22 2 minutes ago Running nginx 0 897df1345932e nginx
63f5c736907bb gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 3 minutes ago Running busybox 0 1119705bbbc91 busybox
9dce2dd7f0160 registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd 4 minutes ago Running controller 0 56355ca7adaf4 ingress-nginx-controller-675c5ddd98-4sg7b
3007d5d9b87ff 08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2 4 minutes ago Exited patch 1 25528dd9fcaba ingress-nginx-admission-patch-gvdhp
5692922f62ea7 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39 4 minutes ago Exited create 0 8482a2f23e124 ingress-nginx-admission-create-6wz5v
85eb071014eb3 ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb 4 minutes ago Running gadget 0 b66b29855c11c gadget-lpq46
d8b0e31ea35ec docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 5 minutes ago Running minikube-ingress-dns 0 a5214c32c3864 kube-ingress-dns-minikube
d067f446f0eb8 nvcr.io/nvidia/k8s-device-plugin@sha256:3c54348fe5a57e5700e7d8068e7531d2ef2d5f3ccb70c8f6bac0953432527abd 5 minutes ago Running nvidia-device-plugin-ctr 0 f337d3f0cf36c nvidia-device-plugin-daemonset-wk7jc
79bf6998c3ad2 docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 5 minutes ago Running amd-gpu-device-plugin 0 5bce504853568 amd-gpu-device-plugin-bmt9x
6ec7df59340af 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 5 minutes ago Running storage-provisioner 0 b9203112ad584 storage-provisioner
d9bbe3443120b 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969 5 minutes ago Running coredns 0 3127f96c0baeb coredns-66bc5c9577-78npv
f35bcf879471a fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7 5 minutes ago Running kube-proxy 0 f2597388f6315 kube-proxy-t7dv4
08330f99452fd 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813 6 minutes ago Running kube-scheduler 0 94dbd4309de0a kube-scheduler-addons-192357
fc85502dd6933 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115 6 minutes ago Running etcd 0 8c998f398b56c etcd-addons-192357
a80b591e8d70d c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97 6 minutes ago Running kube-apiserver 0 07bbe3646cdc9 kube-apiserver-addons-192357
fb7d132b43254 c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f 6 minutes ago Running kube-controller-manager 0 4784ab85e7f00 kube-controller-manager-addons-192357
==> coredns [d9bbe3443120ba39b50cee1e53197112542b15a7fc752601f70aca67cb233db7] <==
[INFO] 10.244.0.8:44388 - 10364 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000092893s
[INFO] 10.244.0.8:44388 - 62714 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000245693s
[INFO] 10.244.0.8:44388 - 46313 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000064051s
[INFO] 10.244.0.8:44388 - 52070 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00020967s
[INFO] 10.244.0.8:44388 - 61078 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000068027s
[INFO] 10.244.0.8:44388 - 57219 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000081083s
[INFO] 10.244.0.8:44388 - 15248 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.0000701s
[INFO] 10.244.0.8:55774 - 17057 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000105913s
[INFO] 10.244.0.8:55774 - 18724 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000155811s
[INFO] 10.244.0.8:37996 - 65274 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091081s
[INFO] 10.244.0.8:37996 - 64978 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000200175s
[INFO] 10.244.0.8:36084 - 10302 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000098246s
[INFO] 10.244.0.8:36084 - 10740 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000103369s
[INFO] 10.244.0.8:34099 - 8673 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000121424s
[INFO] 10.244.0.8:34099 - 8853 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000131696s
[INFO] 10.244.0.23:52167 - 9009 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000417204s
[INFO] 10.244.0.23:48087 - 31280 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000270816s
[INFO] 10.244.0.23:55806 - 8157 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000079253s
[INFO] 10.244.0.23:34686 - 2559 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000080166s
[INFO] 10.244.0.23:44222 - 19847 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000170114s
[INFO] 10.244.0.23:36343 - 48214 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000073389s
[INFO] 10.244.0.23:52091 - 25318 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001095305s
[INFO] 10.244.0.23:56687 - 33657 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003038453s
[INFO] 10.244.0.28:44152 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000407552s
[INFO] 10.244.0.28:38196 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000604118s
==> describe nodes <==
Name: addons-192357
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
        beta.kubernetes.io/os=linux
        kubernetes.io/arch=amd64
        kubernetes.io/hostname=addons-192357
        kubernetes.io/os=linux
        minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
        minikube.k8s.io/name=addons-192357
        minikube.k8s.io/primary=true
        minikube.k8s.io/updated_at=2025_10_25T09_32_17_0700
        minikube.k8s.io/version=v1.37.0
        node-role.kubernetes.io/control-plane=
        node.kubernetes.io/exclude-from-external-load-balancers=
        topology.hostpath.csi/node=addons-192357
Annotations: node.alpha.kubernetes.io/ttl: 0
             volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 25 Oct 2025 09:32:13 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-192357
AcquireTime: <unset>
RenewTime: Sat, 25 Oct 2025 09:38:04 +0000
Conditions:
Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
----             ------  -----------------                 ------------------                ------                      -------
MemoryPressure   False   Sat, 25 Oct 2025 09:36:21 +0000   Sat, 25 Oct 2025 09:32:11 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
DiskPressure     False   Sat, 25 Oct 2025 09:36:21 +0000   Sat, 25 Oct 2025 09:32:11 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
PIDPressure      False   Sat, 25 Oct 2025 09:36:21 +0000   Sat, 25 Oct 2025 09:32:11 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
Ready            True    Sat, 25 Oct 2025 09:36:21 +0000   Sat, 25 Oct 2025 09:32:18 +0000   KubeletReady                kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.24
Hostname: addons-192357
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4008588Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4008588Ki
pods: 110
System Info:
Machine ID: f5973cda48fd4955bdc30658b1e0d6a0
System UUID: f5973cda-48fd-4955-bdc3-0658b1e0d6a0
Boot ID: 9c3e4de3-fc41-4560-922d-dfd8d0c4589e
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.34.1
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (15 in total)
Namespace      Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
---------      ----                                        ------------  ----------  ---------------  -------------  ---
default        busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
default        hello-world-app-5d498dc89-9mrbg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
default        nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
gadget         gadget-lpq46                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s
ingress-nginx  ingress-nginx-controller-675c5ddd98-4sg7b   100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m43s
kube-system    amd-gpu-device-plugin-bmt9x                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
kube-system    coredns-66bc5c9577-78npv                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m50s
kube-system    etcd-addons-192357                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m57s
kube-system    kube-apiserver-addons-192357                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m55s
kube-system    kube-controller-manager-addons-192357       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m57s
kube-system    kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s
kube-system    kube-proxy-t7dv4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
kube-system    kube-scheduler-addons-192357                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m55s
kube-system    nvidia-device-plugin-daemonset-wk7jc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
kube-system    storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests    Limits
--------           --------    ------
cpu                850m (42%)  0 (0%)
memory             260Mi (6%)  170Mi (4%)
ephemeral-storage  0 (0%)      0 (0%)
hugepages-2Mi      0 (0%)      0 (0%)
Events:
Type    Reason                   Age    From             Message
----    ------                   ----   ----             -------
Normal  Starting                 5m48s  kube-proxy
Normal  Starting                 5m56s  kubelet          Starting kubelet.
Normal  NodeAllocatableEnforced  5m56s  kubelet          Updated Node Allocatable limit across pods
Normal  NodeHasSufficientMemory  5m55s  kubelet          Node addons-192357 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    5m55s  kubelet          Node addons-192357 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     5m55s  kubelet          Node addons-192357 status is now: NodeHasSufficientPID
Normal  NodeReady                5m54s  kubelet          Node addons-192357 status is now: NodeReady
Normal  RegisteredNode           5m51s  node-controller  Node addons-192357 event: Registered Node addons-192357 in Controller
==> dmesg <==
[ +16.137444] kauditd_printk_skb: 245 callbacks suppressed
[ +6.763273] kauditd_printk_skb: 5 callbacks suppressed
[Oct25 09:33] kauditd_printk_skb: 17 callbacks suppressed
[ +8.310331] kauditd_printk_skb: 26 callbacks suppressed
[ +6.568167] kauditd_printk_skb: 26 callbacks suppressed
[ +7.118146] kauditd_printk_skb: 20 callbacks suppressed
[ +0.367082] kauditd_printk_skb: 101 callbacks suppressed
[ +4.104125] kauditd_printk_skb: 76 callbacks suppressed
[ +5.000651] kauditd_printk_skb: 66 callbacks suppressed
[ +5.028248] kauditd_printk_skb: 56 callbacks suppressed
[ +8.923312] kauditd_printk_skb: 41 callbacks suppressed
[Oct25 09:34] kauditd_printk_skb: 17 callbacks suppressed
[ +0.000065] kauditd_printk_skb: 47 callbacks suppressed
[Oct25 09:35] kauditd_printk_skb: 41 callbacks suppressed
[ +0.000041] kauditd_printk_skb: 22 callbacks suppressed
[ +1.367971] kauditd_printk_skb: 107 callbacks suppressed
[ +4.197249] kauditd_printk_skb: 49 callbacks suppressed
[ +0.364014] kauditd_printk_skb: 118 callbacks suppressed
[ +0.000044] kauditd_printk_skb: 126 callbacks suppressed
[ +5.129357] kauditd_printk_skb: 32 callbacks suppressed
[ +3.955847] kauditd_printk_skb: 67 callbacks suppressed
[ +4.530241] kauditd_printk_skb: 63 callbacks suppressed
[Oct25 09:36] kauditd_printk_skb: 22 callbacks suppressed
[ +8.892623] kauditd_printk_skb: 61 callbacks suppressed
[Oct25 09:38] kauditd_printk_skb: 127 callbacks suppressed
==> etcd [fc85502dd6933ebaa9c636252c2ed8deeefc7c1c6d8006bf1515451ed380d77a] <==
{"level":"info","ts":"2025-10-25T09:33:21.139123Z","caller":"traceutil/trace.go:172","msg":"trace[672362248] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:987; }","duration":"208.976175ms","start":"2025-10-25T09:33:20.930135Z","end":"2025-10-25T09:33:21.139112Z","steps":["trace[672362248] 'agreement among raft nodes before linearized reading' (duration: 208.781659ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-25T09:33:21.139613Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"175.933959ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-10-25T09:33:21.139846Z","caller":"traceutil/trace.go:172","msg":"trace[915144000] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:987; }","duration":"176.167686ms","start":"2025-10-25T09:33:20.963669Z","end":"2025-10-25T09:33:21.139837Z","steps":["trace[915144000] 'agreement among raft nodes before linearized reading' (duration: 175.920709ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-25T09:33:21.139262Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"275.530067ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-10-25T09:33:21.140266Z","caller":"traceutil/trace.go:172","msg":"trace[1606050474] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:987; }","duration":"276.53851ms","start":"2025-10-25T09:33:20.863719Z","end":"2025-10-25T09:33:21.140257Z","steps":["trace[1606050474] 'agreement among raft nodes before linearized reading' (duration: 270.714143ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-25T09:33:21.140581Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"197.845015ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-10-25T09:33:21.140685Z","caller":"traceutil/trace.go:172","msg":"trace[491197053] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:987; }","duration":"197.928528ms","start":"2025-10-25T09:33:20.942727Z","end":"2025-10-25T09:33:21.140656Z","steps":["trace[491197053] 'agreement among raft nodes before linearized reading' (duration: 197.829499ms)"],"step_count":1}
{"level":"info","ts":"2025-10-25T09:33:30.608527Z","caller":"traceutil/trace.go:172","msg":"trace[1794292544] linearizableReadLoop","detail":"{readStateIndex:1044; appliedIndex:1044; }","duration":"245.292676ms","start":"2025-10-25T09:33:30.363211Z","end":"2025-10-25T09:33:30.608503Z","steps":["trace[1794292544] 'read index received' (duration: 245.28685ms)","trace[1794292544] 'applied index is now lower than readState.Index' (duration: 5.21µs)"],"step_count":2}
{"level":"warn","ts":"2025-10-25T09:33:30.608688Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"245.454244ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-10-25T09:33:30.608769Z","caller":"traceutil/trace.go:172","msg":"trace[974730388] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1014; }","duration":"245.553588ms","start":"2025-10-25T09:33:30.363206Z","end":"2025-10-25T09:33:30.608760Z","steps":["trace[974730388] 'agreement among raft nodes before linearized reading' (duration: 245.425981ms)"],"step_count":1}
{"level":"info","ts":"2025-10-25T09:33:30.608871Z","caller":"traceutil/trace.go:172","msg":"trace[358544004] transaction","detail":"{read_only:false; response_revision:1015; number_of_response:1; }","duration":"279.579456ms","start":"2025-10-25T09:33:30.329280Z","end":"2025-10-25T09:33:30.608860Z","steps":["trace[358544004] 'process raft request' (duration: 279.2513ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-25T09:33:30.609110Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"167.59112ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-10-25T09:33:30.609437Z","caller":"traceutil/trace.go:172","msg":"trace[274571824] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1015; }","duration":"167.921844ms","start":"2025-10-25T09:33:30.441506Z","end":"2025-10-25T09:33:30.609428Z","steps":["trace[274571824] 'agreement among raft nodes before linearized reading' (duration: 167.574216ms)"],"step_count":1}
{"level":"info","ts":"2025-10-25T09:33:36.205074Z","caller":"traceutil/trace.go:172","msg":"trace[184621497] linearizableReadLoop","detail":"{readStateIndex:1084; appliedIndex:1084; }","duration":"151.943605ms","start":"2025-10-25T09:33:36.053115Z","end":"2025-10-25T09:33:36.205058Z","steps":["trace[184621497] 'read index received' (duration: 151.938281ms)","trace[184621497] 'applied index is now lower than readState.Index' (duration: 4.646µs)"],"step_count":2}
{"level":"warn","ts":"2025-10-25T09:33:36.205159Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"152.039331ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-10-25T09:33:36.205176Z","caller":"traceutil/trace.go:172","msg":"trace[1435031726] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1052; }","duration":"152.076074ms","start":"2025-10-25T09:33:36.053095Z","end":"2025-10-25T09:33:36.205171Z","steps":["trace[1435031726] 'agreement among raft nodes before linearized reading' (duration: 152.014654ms)"],"step_count":1}
{"level":"info","ts":"2025-10-25T09:33:36.205509Z","caller":"traceutil/trace.go:172","msg":"trace[1149721048] transaction","detail":"{read_only:false; response_revision:1053; number_of_response:1; }","duration":"240.733597ms","start":"2025-10-25T09:33:35.964767Z","end":"2025-10-25T09:33:36.205500Z","steps":["trace[1149721048] 'process raft request' (duration: 240.648706ms)"],"step_count":1}
{"level":"info","ts":"2025-10-25T09:33:45.874979Z","caller":"traceutil/trace.go:172","msg":"trace[1909366842] transaction","detail":"{read_only:false; response_revision:1112; number_of_response:1; }","duration":"158.020769ms","start":"2025-10-25T09:33:45.716948Z","end":"2025-10-25T09:33:45.874968Z","steps":["trace[1909366842] 'process raft request' (duration: 156.603737ms)"],"step_count":1}
{"level":"info","ts":"2025-10-25T09:35:22.948214Z","caller":"traceutil/trace.go:172","msg":"trace[1790214615] transaction","detail":"{read_only:false; response_revision:1452; number_of_response:1; }","duration":"217.431841ms","start":"2025-10-25T09:35:22.730766Z","end":"2025-10-25T09:35:22.948198Z","steps":["trace[1790214615] 'process raft request' (duration: 216.601851ms)"],"step_count":1}
{"level":"info","ts":"2025-10-25T09:35:25.393474Z","caller":"traceutil/trace.go:172","msg":"trace[1414620673] linearizableReadLoop","detail":"{readStateIndex:1512; appliedIndex:1512; }","duration":"136.202548ms","start":"2025-10-25T09:35:25.257216Z","end":"2025-10-25T09:35:25.393419Z","steps":["trace[1414620673] 'read index received' (duration: 136.197377ms)","trace[1414620673] 'applied index is now lower than readState.Index' (duration: 4.417µs)"],"step_count":2}
{"level":"info","ts":"2025-10-25T09:35:25.393945Z","caller":"traceutil/trace.go:172","msg":"trace[474730309] transaction","detail":"{read_only:false; response_revision:1457; number_of_response:1; }","duration":"166.787755ms","start":"2025-10-25T09:35:25.227147Z","end":"2025-10-25T09:35:25.393935Z","steps":["trace[474730309] 'process raft request' (duration: 166.32418ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-25T09:35:25.393824Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"136.588716ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-10-25T09:35:25.394377Z","caller":"traceutil/trace.go:172","msg":"trace[871011921] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1456; }","duration":"137.155955ms","start":"2025-10-25T09:35:25.257213Z","end":"2025-10-25T09:35:25.394369Z","steps":["trace[871011921] 'agreement among raft nodes before linearized reading' (duration: 136.35297ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-25T09:35:42.455683Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.768426ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1654397877643038936 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/ipaddresses/10.97.212.9\" mod_revision:0 > success:<request_put:<key:\"/registry/ipaddresses/10.97.212.9\" value_size:529 >> failure:<>>","response":"size:16"}
{"level":"info","ts":"2025-10-25T09:35:42.456032Z","caller":"traceutil/trace.go:172","msg":"trace[623501063] transaction","detail":"{read_only:false; response_revision:1626; number_of_response:1; }","duration":"222.420388ms","start":"2025-10-25T09:35:42.233588Z","end":"2025-10-25T09:35:42.456009Z","steps":["trace[623501063] 'process raft request' (duration: 95.27834ms)","trace[623501063] 'compare' (duration: 126.682928ms)"],"step_count":2}
==> kernel <==
09:38:12 up 6 min, 0 users, load average: 0.16, 0.57, 0.36
Linux addons-192357 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [a80b591e8d70dd270c4e12d64052b5d0e794267800051ab89ad490437d19b9a2] <==
> logger="UnhandledError"
E1025 09:33:33.883197 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.50.36:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.50.36:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
I1025 09:33:33.927877 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E1025 09:35:07.462374 1 conn.go:339] Error on socket receive: read tcp 192.168.39.24:8443->192.168.39.1:49424: use of closed network connection
E1025 09:35:07.646594 1 conn.go:339] Error on socket receive: read tcp 192.168.39.24:8443->192.168.39.1:56436: use of closed network connection
I1025 09:35:16.866241 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.131.135"}
I1025 09:35:42.038104 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1025 09:35:42.462875 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.212.9"}
E1025 09:35:47.684096 1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
I1025 09:35:51.206727 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I1025 09:36:12.878481 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1025 09:36:12.878922 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1025 09:36:12.915968 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1025 09:36:12.918430 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1025 09:36:12.924894 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1025 09:36:12.926059 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1025 09:36:12.952448 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1025 09:36:12.952495 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1025 09:36:12.982104 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1025 09:36:12.982186 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1025 09:36:13.929832 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1025 09:36:13.982800 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
W1025 09:36:14.081901 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
I1025 09:36:34.896699 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1025 09:38:10.975987 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.153.4"}
==> kube-controller-manager [fb7d132b43254ef28561225f44344773ea7e974be75efa1028b7b242888bbe11] <==
I1025 09:36:21.241707 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1025 09:36:21.281125 1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
I1025 09:36:21.281189 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E1025 09:36:23.483317 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1025 09:36:23.484180 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1025 09:36:29.376890 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1025 09:36:29.378125 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1025 09:36:31.154134 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1025 09:36:31.155375 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1025 09:36:32.747865 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1025 09:36:32.749028 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1025 09:36:49.997572 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1025 09:36:49.998594 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1025 09:36:54.172522 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1025 09:36:54.173531 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1025 09:36:57.398728 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1025 09:36:57.399664 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1025 09:37:32.321480 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1025 09:37:32.322429 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1025 09:37:42.479354 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1025 09:37:42.480356 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1025 09:37:44.389503 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1025 09:37:44.390494 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1025 09:38:04.418930 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1025 09:38:04.421099 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
==> kube-proxy [f35bcf879471a5789c49c7b5e0aa6584b1d9d5d1d86a66e81c17d3c25c30f14a] <==
I1025 09:32:23.198569 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1025 09:32:23.300535 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1025 09:32:23.300574 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.24"]
E1025 09:32:23.300642 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1025 09:32:23.510102 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1025 09:32:23.510681 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1025 09:32:23.510731 1 server_linux.go:132] "Using iptables Proxier"
I1025 09:32:23.550076 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1025 09:32:23.551367 1 server.go:527] "Version info" version="v1.34.1"
I1025 09:32:23.551394 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1025 09:32:23.554937 1 config.go:309] "Starting node config controller"
I1025 09:32:23.556870 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1025 09:32:23.558066 1 config.go:403] "Starting serviceCIDR config controller"
I1025 09:32:23.558076 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1025 09:32:23.558101 1 config.go:200] "Starting service config controller"
I1025 09:32:23.558104 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1025 09:32:23.559114 1 config.go:106] "Starting endpoint slice config controller"
I1025 09:32:23.559124 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1025 09:32:23.658555 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1025 09:32:23.658621 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1025 09:32:23.658669 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1025 09:32:23.659750 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-scheduler [08330f99452fd04c00d3ebf0081daaab6660dc3427391532830d3485ef7a6630] <==
E1025 09:32:13.740031 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1025 09:32:13.740521 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1025 09:32:13.740768 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1025 09:32:13.740852 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1025 09:32:13.740901 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1025 09:32:13.740935 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1025 09:32:13.740968 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1025 09:32:13.741026 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1025 09:32:13.741042 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1025 09:32:13.743513 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1025 09:32:13.743628 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1025 09:32:13.747518 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1025 09:32:13.747784 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1025 09:32:14.629398 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1025 09:32:14.639578 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1025 09:32:14.656118 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1025 09:32:14.778552 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1025 09:32:14.791890 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1025 09:32:14.895364 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1025 09:32:14.915733 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1025 09:32:14.923406 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1025 09:32:14.945146 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1025 09:32:14.974751 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1025 09:32:15.302165 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
I1025 09:32:17.725884 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Oct 25 09:36:37 addons-192357 kubelet[1506]: E1025 09:36:37.083143 1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761384997082750812 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598025} inodes_used:{value:201}}"
Oct 25 09:36:47 addons-192357 kubelet[1506]: E1025 09:36:47.085836 1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761385007085434606 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598025} inodes_used:{value:201}}"
Oct 25 09:36:47 addons-192357 kubelet[1506]: E1025 09:36:47.085859 1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761385007085434606 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598025} inodes_used:{value:201}}"
Oct 25 09:36:52 addons-192357 kubelet[1506]: I1025 09:36:52.911570 1506 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-78npv" secret="" err="secret \"gcp-auth\" not found"
Oct 25 09:36:57 addons-192357 kubelet[1506]: E1025 09:36:57.088556 1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761385017088094741 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598025} inodes_used:{value:201}}"
Oct 25 09:36:57 addons-192357 kubelet[1506]: E1025 09:36:57.088603 1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761385017088094741 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598025} inodes_used:{value:201}}"
Oct 25 09:37:03 addons-192357 kubelet[1506]: I1025 09:37:03.911560 1506 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-bmt9x" secret="" err="secret \"gcp-auth\" not found"
Oct 25 09:37:07 addons-192357 kubelet[1506]: E1025 09:37:07.091420 1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761385027090817375 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598025} inodes_used:{value:201}}"
Oct 25 09:37:07 addons-192357 kubelet[1506]: E1025 09:37:07.091470 1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761385027090817375 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598025} inodes_used:{value:201}}"
Oct 25 09:37:15 addons-192357 kubelet[1506]: I1025 09:37:15.910825 1506 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Oct 25 09:37:17 addons-192357 kubelet[1506]: E1025 09:37:17.095699 1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761385037095120422 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598025} inodes_used:{value:201}}"
Oct 25 09:37:17 addons-192357 kubelet[1506]: E1025 09:37:17.095739 1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761385037095120422 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598025} inodes_used:{value:201}}"
Oct 25 09:37:27 addons-192357 kubelet[1506]: E1025 09:37:27.098543 1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761385047097801912 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598025} inodes_used:{value:201}}"
Oct 25 09:37:27 addons-192357 kubelet[1506]: E1025 09:37:27.098594 1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761385047097801912 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598025} inodes_used:{value:201}}"
Oct 25 09:37:37 addons-192357 kubelet[1506]: E1025 09:37:37.101731 1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761385057101258028 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598025} inodes_used:{value:201}}"
Oct 25 09:37:37 addons-192357 kubelet[1506]: E1025 09:37:37.101756 1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761385057101258028 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598025} inodes_used:{value:201}}"
Oct 25 09:37:47 addons-192357 kubelet[1506]: E1025 09:37:47.104708 1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761385067104358940 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598025} inodes_used:{value:201}}"
Oct 25 09:37:47 addons-192357 kubelet[1506]: E1025 09:37:47.104734 1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761385067104358940 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598025} inodes_used:{value:201}}"
Oct 25 09:37:47 addons-192357 kubelet[1506]: I1025 09:37:47.911791 1506 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-wk7jc" secret="" err="secret \"gcp-auth\" not found"
Oct 25 09:37:57 addons-192357 kubelet[1506]: E1025 09:37:57.107215 1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761385077106873625 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598025} inodes_used:{value:201}}"
Oct 25 09:37:57 addons-192357 kubelet[1506]: E1025 09:37:57.107238 1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761385077106873625 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598025} inodes_used:{value:201}}"
Oct 25 09:38:07 addons-192357 kubelet[1506]: E1025 09:38:07.110274 1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761385087109892626 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598025} inodes_used:{value:201}}"
Oct 25 09:38:07 addons-192357 kubelet[1506]: E1025 09:38:07.110355 1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761385087109892626 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598025} inodes_used:{value:201}}"
Oct 25 09:38:08 addons-192357 kubelet[1506]: I1025 09:38:08.915507 1506 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-bmt9x" secret="" err="secret \"gcp-auth\" not found"
Oct 25 09:38:11 addons-192357 kubelet[1506]: I1025 09:38:11.024805 1506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c77rx\" (UniqueName: \"kubernetes.io/projected/50767ad2-4da5-4b4a-8c36-f22201cf9813-kube-api-access-c77rx\") pod \"hello-world-app-5d498dc89-9mrbg\" (UID: \"50767ad2-4da5-4b4a-8c36-f22201cf9813\") " pod="default/hello-world-app-5d498dc89-9mrbg"
==> storage-provisioner [6ec7df59340af4c1eaa07788351c871e4cecbfb0605d6fdf26c5560034b2d8bc] <==
W1025 09:37:48.230328 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 09:37:50.232851 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 09:37:50.237889 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 09:37:52.240944 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 09:37:52.248978 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 09:37:54.252686 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 09:37:54.257214 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 09:37:56.260269 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 09:37:56.266992 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 09:37:58.270256 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 09:37:58.275004 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 09:38:00.278914 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 09:38:00.285384 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 09:38:02.288415 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 09:38:02.292969 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 09:38:04.296595 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 09:38:04.303239 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 09:38:06.306459 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 09:38:06.310714 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 09:38:08.313754 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 09:38:08.320475 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 09:38:10.323736 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 09:38:10.328644 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 09:38:12.338464 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1025 09:38:12.343537 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
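The kube-apiserver section of the log above repeatedly reports the aggregated v1beta1.metrics.k8s.io API failing with "context deadline exceeded" against https://10.105.50.36:443. A possible manual follow-up, assuming the metrics-server addon backs that APIService and carries the usual k8s-app=metrics-server label (an assumption, not something shown in this log):
  kubectl --context addons-192357 get apiservice v1beta1.metrics.k8s.io -o wide
  kubectl --context addons-192357 -n kube-system get pods -l k8s-app=metrics-server -o wide   # label selector is assumed; adjust to the addon's manifests
Both commands use only standard kubectl flags.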
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-192357 -n addons-192357
helpers_test.go:269: (dbg) Run: kubectl --context addons-192357 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-9mrbg ingress-nginx-admission-create-6wz5v ingress-nginx-admission-patch-gvdhp
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context addons-192357 describe pod hello-world-app-5d498dc89-9mrbg ingress-nginx-admission-create-6wz5v ingress-nginx-admission-patch-gvdhp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-192357 describe pod hello-world-app-5d498dc89-9mrbg ingress-nginx-admission-create-6wz5v ingress-nginx-admission-patch-gvdhp: exit status 1 (73.099709ms)
-- stdout --
Name: hello-world-app-5d498dc89-9mrbg
Namespace: default
Priority: 0
Service Account: default
Node: addons-192357/192.168.39.24
Start Time: Sat, 25 Oct 2025 09:38:10 +0000
Labels: app=hello-world-app
pod-template-hash=5d498dc89
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/hello-world-app-5d498dc89
Containers:
hello-world-app:
Container ID:
Image: docker.io/kicbase/echo-server:1.0
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c77rx (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-c77rx:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3s default-scheduler Successfully assigned default/hello-world-app-5d498dc89-9mrbg to addons-192357
Normal Pulling 2s kubelet Pulling image "docker.io/kicbase/echo-server:1.0"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-6wz5v" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-gvdhp" not found
** /stderr **
helpers_test.go:287: kubectl --context addons-192357 describe pod hello-world-app-5d498dc89-9mrbg ingress-nginx-admission-create-6wz5v ingress-nginx-admission-patch-gvdhp: exit status 1
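The describe output above shows the two ingress-nginx admission pods already gone (NotFound, most likely because those one-shot admission jobs had completed and been cleaned up) and hello-world-app-5d498dc89-9mrbg still Pending while its image is pulled. A minimal sketch of how the same state could be watched by hand, using only standard kubectl flags:
  kubectl --context addons-192357 get pod hello-world-app-5d498dc89-9mrbg -w   # -w streams updates until interrupted
  kubectl --context addons-192357 get events -n default --sort-by=.lastTimestamp
The sort key is an illustrative choice; nothing here is part of the test harness itself.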
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-192357 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-192357 addons disable ingress-dns --alsologtostderr -v=1: (1.431934602s)
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-192357 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-192357 addons disable ingress --alsologtostderr -v=1: (7.663179144s)
--- FAIL: TestAddons/parallel/Ingress (160.45s)
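For reference, re-running only this subtest from a minikube checkout should need no more than Go's subtest selector; the package path, timeout, and any driver or binary flags are assumptions about the local environment rather than values taken from this log:
  go test ./test/integration -run 'TestAddons/parallel/Ingress' -timeout 60m   # harness-specific flags (driver, minikube binary path) may also be required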