=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run: kubectl --context addons-468489 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run: kubectl --context addons-468489 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run: kubectl --context addons-468489 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [02be2896-2e22-4268-9b74-1264e195dc37] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [02be2896-2e22-4268-9b74-1264e195dc37] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.013745091s
I1101 08:32:50.734002 9793 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run: out/minikube-linux-amd64 -p addons-468489 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-468489 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.070451474s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
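The `curl` above died with exit status 28 — curl's code for an operation timeout (`CURLE_OPERATION_TIMEDOUT`) — and `minikube ssh` propagates the remote command's exit status back to the test. A minimal sketch (not minikube's actual helper; `runAndStatus` is a hypothetical name) of surfacing a child process's exit status the way the harness does:

```go
// Sketch: run a command and report its exit status, as the test harness
// does for `minikube ssh "curl ..."`. Status 28 from curl means the
// transfer timed out before the ingress responded.
package main

import (
	"fmt"
	"os/exec"
)

// runAndStatus runs the command and returns its exit status (0 on success,
// -1 if the process could not be started at all).
func runAndStatus(name string, args ...string) int {
	cmd := exec.Command(name, args...)
	if err := cmd.Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			return ee.ExitCode()
		}
		return -1
	}
	return 0
}

func main() {
	// Simulate the timed-out curl: the shell exits with curl's status 28.
	fmt.Println(runAndStatus("sh", "-c", "exit 28"))
}
```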
addons_test.go:288: (dbg) Run: kubectl --context addons-468489 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run: out/minikube-linux-amd64 -p addons-468489 ip
addons_test.go:299: (dbg) Run: nslookup hello-john.test 192.168.39.108
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-468489 -n addons-468489
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p addons-468489 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-468489 logs -n 25: (1.352153425s)
helpers_test.go:260: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-362299 │ download-only-362299 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
│ start │ --download-only -p binary-mirror-153470 --alsologtostderr --binary-mirror http://127.0.0.1:38639 --driver=kvm2 --container-runtime=crio │ binary-mirror-153470 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ │
│ delete │ -p binary-mirror-153470 │ binary-mirror-153470 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
│ addons │ disable dashboard -p addons-468489 │ addons-468489 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ │
│ addons │ enable dashboard -p addons-468489 │ addons-468489 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ │
│ start │ -p addons-468489 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-468489 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:32 UTC │
│ addons │ addons-468489 addons disable volcano --alsologtostderr -v=1 │ addons-468489 │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:32 UTC │
│ addons │ addons-468489 addons disable gcp-auth --alsologtostderr -v=1 │ addons-468489 │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:32 UTC │
│ addons │ enable headlamp -p addons-468489 --alsologtostderr -v=1 │ addons-468489 │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:32 UTC │
│ addons │ addons-468489 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-468489 │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:32 UTC │
│ addons │ addons-468489 addons disable metrics-server --alsologtostderr -v=1 │ addons-468489 │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:32 UTC │
│ addons │ addons-468489 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-468489 │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:32 UTC │
│ addons │ addons-468489 addons disable headlamp --alsologtostderr -v=1 │ addons-468489 │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:32 UTC │
│ ip │ addons-468489 ip │ addons-468489 │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:32 UTC │
│ addons │ addons-468489 addons disable registry --alsologtostderr -v=1 │ addons-468489 │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:32 UTC │
│ addons │ addons-468489 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-468489 │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:32 UTC │
│ ssh │ addons-468489 ssh cat /opt/local-path-provisioner/pvc-cd2a8e6f-0b78-44b3-86d7-51ee5b835709_default_test-pvc/file1 │ addons-468489 │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:32 UTC │
│ addons │ addons-468489 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-468489 │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:33 UTC │
│ ssh │ addons-468489 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-468489 │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ │
│ addons │ addons-468489 addons disable yakd --alsologtostderr -v=1 │ addons-468489 │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:33 UTC │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-468489 │ addons-468489 │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │ 01 Nov 25 08:33 UTC │
│ addons │ addons-468489 addons disable registry-creds --alsologtostderr -v=1 │ addons-468489 │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │ 01 Nov 25 08:33 UTC │
│ addons │ addons-468489 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-468489 │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │ 01 Nov 25 08:33 UTC │
│ addons │ addons-468489 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-468489 │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │ 01 Nov 25 08:33 UTC │
│ ip │ addons-468489 ip │ addons-468489 │ jenkins │ v1.37.0 │ 01 Nov 25 08:35 UTC │ 01 Nov 25 08:35 UTC │
└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/11/01 08:29:20
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.24.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
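The header above documents the klog-style line format used by every entry that follows. As an illustration only (this parser is not part of minikube), the format can be split with a regexp:

```go
// Sketch: parse one klog-style line,
// format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
package main

import (
	"fmt"
	"regexp"
)

var klogRe = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

// parseKlog splits a line into severity, mmdd date, time, pid,
// source location, and message; ok is false if the line doesn't match.
func parseKlog(line string) (sev, date, clock, pid, src, msg string, ok bool) {
	m := klogRe.FindStringSubmatch(line)
	if m == nil {
		return "", "", "", "", "", "", false
	}
	return m[1], m[2], m[3], m[4], m[5], m[6], true
}

func main() {
	sev, date, clock, pid, src, msg, ok := parseKlog(
		"I1101 08:29:20.286995   10392 out.go:360] Setting OutFile to fd 1 ...")
	fmt.Println(sev, date, clock, pid, src, msg, ok)
}
```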
I1101 08:29:20.286995 10392 out.go:360] Setting OutFile to fd 1 ...
I1101 08:29:20.287203 10392 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:29:20.287228 10392 out.go:374] Setting ErrFile to fd 2...
I1101 08:29:20.287232 10392 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:29:20.287423 10392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5912/.minikube/bin
I1101 08:29:20.287896 10392 out.go:368] Setting JSON to false
I1101 08:29:20.288665 10392 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":707,"bootTime":1761985053,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1101 08:29:20.288746 10392 start.go:143] virtualization: kvm guest
I1101 08:29:20.290763 10392 out.go:179] * [addons-468489] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1101 08:29:20.292291 10392 out.go:179] - MINIKUBE_LOCATION=21835
I1101 08:29:20.292293 10392 notify.go:221] Checking for updates...
I1101 08:29:20.293780 10392 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1101 08:29:20.295052 10392 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21835-5912/kubeconfig
I1101 08:29:20.296222 10392 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5912/.minikube
I1101 08:29:20.297429 10392 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1101 08:29:20.298659 10392 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1101 08:29:20.300089 10392 driver.go:422] Setting default libvirt URI to qemu:///system
I1101 08:29:20.330183 10392 out.go:179] * Using the kvm2 driver based on user configuration
I1101 08:29:20.331333 10392 start.go:309] selected driver: kvm2
I1101 08:29:20.331346 10392 start.go:930] validating driver "kvm2" against <nil>
I1101 08:29:20.331363 10392 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1101 08:29:20.332047 10392 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1101 08:29:20.332265 10392 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1101 08:29:20.332289 10392 cni.go:84] Creating CNI manager for ""
I1101 08:29:20.332327 10392 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1101 08:29:20.332333 10392 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1101 08:29:20.332367 10392 start.go:353] cluster config:
{Name:addons-468489 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-468489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1101 08:29:20.332451 10392 iso.go:125] acquiring lock: {Name:mk345092679db7c379cbaa00125c4f18e2b4a125 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1101 08:29:20.334503 10392 out.go:179] * Starting "addons-468489" primary control-plane node in "addons-468489" cluster
I1101 08:29:20.335721 10392 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1101 08:29:20.335760 10392 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
I1101 08:29:20.335767 10392 cache.go:59] Caching tarball of preloaded images
I1101 08:29:20.335862 10392 preload.go:233] Found /home/jenkins/minikube-integration/21835-5912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1101 08:29:20.335877 10392 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
I1101 08:29:20.336180 10392 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/config.json ...
I1101 08:29:20.336202 10392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/config.json: {Name:mk8aca735bb3c1afb644bd37d8f027126ddf2db3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 08:29:20.336369 10392 start.go:360] acquireMachinesLock for addons-468489: {Name:mk8049b4e421873947dfa0bcd96201ccb1e1825c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1101 08:29:20.336431 10392 start.go:364] duration metric: took 44.799µs to acquireMachinesLock for "addons-468489"
I1101 08:29:20.336456 10392 start.go:93] Provisioning new machine with config: &{Name:addons-468489 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-468489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1101 08:29:20.336509 10392 start.go:125] createHost starting for "" (driver="kvm2")
I1101 08:29:20.338850 10392 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1101 08:29:20.339032 10392 start.go:159] libmachine.API.Create for "addons-468489" (driver="kvm2")
I1101 08:29:20.339060 10392 client.go:173] LocalClient.Create starting
I1101 08:29:20.339154 10392 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem
I1101 08:29:20.480018 10392 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/cert.pem
I1101 08:29:20.765632 10392 main.go:143] libmachine: creating domain...
I1101 08:29:20.765654 10392 main.go:143] libmachine: creating network...
I1101 08:29:20.767140 10392 main.go:143] libmachine: found existing default network
I1101 08:29:20.767388 10392 main.go:143] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1101 08:29:20.767960 10392 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d71bc0}
I1101 08:29:20.768068 10392 main.go:143] libmachine: defining private network:
<network>
<name>mk-addons-468489</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1101 08:29:20.775136 10392 main.go:143] libmachine: creating private network mk-addons-468489 192.168.39.0/24...
I1101 08:29:20.844129 10392 main.go:143] libmachine: private network mk-addons-468489 192.168.39.0/24 created
I1101 08:29:20.844470 10392 main.go:143] libmachine: <network>
<name>mk-addons-468489</name>
<uuid>2c1abea7-c4e7-4d53-b596-58a10f0d9c5f</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:9f:03:2c'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
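The private network above is defined by rendering a `<network>` XML document from the chosen subnet and handing it to libvirt. A minimal sketch, assuming a hypothetical `renderNetworkXML` helper rather than minikube's real code, of generating such a definition and round-tripping it through `encoding/xml` as a sanity check:

```go
// Sketch: render a libvirt <network> definition like the one above from a
// name and address parameters, then parse it back to confirm it is
// well-formed before it would be passed to libvirt.
package main

import (
	"bytes"
	"encoding/xml"
	"fmt"
	"text/template"
)

const networkTmpl = `<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='255.255.255.0'>
    <dhcp>
      <range start='{{.First}}' end='{{.Last}}'/>
    </dhcp>
  </ip>
</network>`

type netParams struct {
	Name, Gateway, First, Last string
}

// renderNetworkXML fills the template and validates the result parses.
func renderNetworkXML(p netParams) (string, error) {
	var buf bytes.Buffer
	if err := template.Must(template.New("net").Parse(networkTmpl)).Execute(&buf, p); err != nil {
		return "", err
	}
	var parsed struct {
		Name string `xml:"name"`
	}
	if err := xml.Unmarshal(buf.Bytes(), &parsed); err != nil {
		return "", err
	}
	if parsed.Name != p.Name {
		return "", fmt.Errorf("name mismatch: %q", parsed.Name)
	}
	return buf.String(), nil
}

func main() {
	out, err := renderNetworkXML(netParams{
		Name: "mk-addons-468489", Gateway: "192.168.39.1",
		First: "192.168.39.2", Last: "192.168.39.253",
	})
	fmt.Println(out, err)
}
```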
I1101 08:29:20.844497 10392 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489 ...
I1101 08:29:20.844515 10392 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21835-5912/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
I1101 08:29:20.844525 10392 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21835-5912/.minikube
I1101 08:29:20.844597 10392 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21835-5912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21835-5912/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso...
I1101 08:29:21.104690 10392 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa...
I1101 08:29:21.132681 10392 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/addons-468489.rawdisk...
I1101 08:29:21.132719 10392 main.go:143] libmachine: Writing magic tar header
I1101 08:29:21.132751 10392 main.go:143] libmachine: Writing SSH key tar header
I1101 08:29:21.132825 10392 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489 ...
I1101 08:29:21.132888 10392 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489
I1101 08:29:21.132909 10392 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489 (perms=drwx------)
I1101 08:29:21.132918 10392 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21835-5912/.minikube/machines
I1101 08:29:21.132933 10392 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21835-5912/.minikube/machines (perms=drwxr-xr-x)
I1101 08:29:21.132943 10392 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21835-5912/.minikube
I1101 08:29:21.132954 10392 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21835-5912/.minikube (perms=drwxr-xr-x)
I1101 08:29:21.132962 10392 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21835-5912
I1101 08:29:21.132972 10392 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21835-5912 (perms=drwxrwxr-x)
I1101 08:29:21.132982 10392 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1101 08:29:21.132996 10392 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1101 08:29:21.133006 10392 main.go:143] libmachine: checking permissions on dir: /home/jenkins
I1101 08:29:21.133013 10392 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1101 08:29:21.133026 10392 main.go:143] libmachine: checking permissions on dir: /home
I1101 08:29:21.133042 10392 main.go:143] libmachine: skipping /home - not owner
I1101 08:29:21.133049 10392 main.go:143] libmachine: defining domain...
I1101 08:29:21.134135 10392 main.go:143] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-468489</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/addons-468489.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-468489'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
I1101 08:29:21.142082 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:22:5c:91 in network default
I1101 08:29:21.142637 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:21.142652 10392 main.go:143] libmachine: starting domain...
I1101 08:29:21.142657 10392 main.go:143] libmachine: ensuring networks are active...
I1101 08:29:21.143320 10392 main.go:143] libmachine: Ensuring network default is active
I1101 08:29:21.143666 10392 main.go:143] libmachine: Ensuring network mk-addons-468489 is active
I1101 08:29:21.144220 10392 main.go:143] libmachine: getting domain XML...
I1101 08:29:21.145281 10392 main.go:143] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-468489</name>
<uuid>83960230-6f48-4964-81c1-c1246eb542bd</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/addons-468489.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:d2:1b:e9'/>
<source network='mk-addons-468489'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:22:5c:91'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1101 08:29:22.435316 10392 main.go:143] libmachine: waiting for domain to start...
I1101 08:29:22.436615 10392 main.go:143] libmachine: domain is now running
I1101 08:29:22.436629 10392 main.go:143] libmachine: waiting for IP...
I1101 08:29:22.437441 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:22.437817 10392 main.go:143] libmachine: no network interface addresses found for domain addons-468489 (source=lease)
I1101 08:29:22.437829 10392 main.go:143] libmachine: trying to list again with source=arp
I1101 08:29:22.438067 10392 main.go:143] libmachine: unable to find current IP address of domain addons-468489 in network mk-addons-468489 (interfaces detected: [])
I1101 08:29:22.438112 10392 retry.go:31] will retry after 230.239695ms: waiting for domain to come up
I1101 08:29:22.669584 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:22.670269 10392 main.go:143] libmachine: no network interface addresses found for domain addons-468489 (source=lease)
I1101 08:29:22.670286 10392 main.go:143] libmachine: trying to list again with source=arp
I1101 08:29:22.670629 10392 main.go:143] libmachine: unable to find current IP address of domain addons-468489 in network mk-addons-468489 (interfaces detected: [])
I1101 08:29:22.670661 10392 retry.go:31] will retry after 360.113061ms: waiting for domain to come up
I1101 08:29:23.032146 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:23.032685 10392 main.go:143] libmachine: no network interface addresses found for domain addons-468489 (source=lease)
I1101 08:29:23.032706 10392 main.go:143] libmachine: trying to list again with source=arp
I1101 08:29:23.032997 10392 main.go:143] libmachine: unable to find current IP address of domain addons-468489 in network mk-addons-468489 (interfaces detected: [])
I1101 08:29:23.033033 10392 retry.go:31] will retry after 478.271754ms: waiting for domain to come up
I1101 08:29:23.512730 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:23.513331 10392 main.go:143] libmachine: no network interface addresses found for domain addons-468489 (source=lease)
I1101 08:29:23.513347 10392 main.go:143] libmachine: trying to list again with source=arp
I1101 08:29:23.513620 10392 main.go:143] libmachine: unable to find current IP address of domain addons-468489 in network mk-addons-468489 (interfaces detected: [])
I1101 08:29:23.513650 10392 retry.go:31] will retry after 510.18084ms: waiting for domain to come up
I1101 08:29:24.025380 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:24.026030 10392 main.go:143] libmachine: no network interface addresses found for domain addons-468489 (source=lease)
I1101 08:29:24.026050 10392 main.go:143] libmachine: trying to list again with source=arp
I1101 08:29:24.026345 10392 main.go:143] libmachine: unable to find current IP address of domain addons-468489 in network mk-addons-468489 (interfaces detected: [])
I1101 08:29:24.026381 10392 retry.go:31] will retry after 643.490483ms: waiting for domain to come up
I1101 08:29:24.671129 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:24.671756 10392 main.go:143] libmachine: no network interface addresses found for domain addons-468489 (source=lease)
I1101 08:29:24.671770 10392 main.go:143] libmachine: trying to list again with source=arp
I1101 08:29:24.672067 10392 main.go:143] libmachine: unable to find current IP address of domain addons-468489 in network mk-addons-468489 (interfaces detected: [])
I1101 08:29:24.672101 10392 retry.go:31] will retry after 894.911325ms: waiting for domain to come up
I1101 08:29:25.569148 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:25.569687 10392 main.go:143] libmachine: no network interface addresses found for domain addons-468489 (source=lease)
I1101 08:29:25.569708 10392 main.go:143] libmachine: trying to list again with source=arp
I1101 08:29:25.569976 10392 main.go:143] libmachine: unable to find current IP address of domain addons-468489 in network mk-addons-468489 (interfaces detected: [])
I1101 08:29:25.570007 10392 retry.go:31] will retry after 937.8264ms: waiting for domain to come up
I1101 08:29:26.509104 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:26.509661 10392 main.go:143] libmachine: no network interface addresses found for domain addons-468489 (source=lease)
I1101 08:29:26.509682 10392 main.go:143] libmachine: trying to list again with source=arp
I1101 08:29:26.509970 10392 main.go:143] libmachine: unable to find current IP address of domain addons-468489 in network mk-addons-468489 (interfaces detected: [])
I1101 08:29:26.510022 10392 retry.go:31] will retry after 1.30157764s: waiting for domain to come up
I1101 08:29:27.813547 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:27.814079 10392 main.go:143] libmachine: no network interface addresses found for domain addons-468489 (source=lease)
I1101 08:29:27.814095 10392 main.go:143] libmachine: trying to list again with source=arp
I1101 08:29:27.814436 10392 main.go:143] libmachine: unable to find current IP address of domain addons-468489 in network mk-addons-468489 (interfaces detected: [])
I1101 08:29:27.814467 10392 retry.go:31] will retry after 1.622542541s: waiting for domain to come up
I1101 08:29:29.439367 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:29.439872 10392 main.go:143] libmachine: no network interface addresses found for domain addons-468489 (source=lease)
I1101 08:29:29.439891 10392 main.go:143] libmachine: trying to list again with source=arp
I1101 08:29:29.440234 10392 main.go:143] libmachine: unable to find current IP address of domain addons-468489 in network mk-addons-468489 (interfaces detected: [])
I1101 08:29:29.440272 10392 retry.go:31] will retry after 2.021531153s: waiting for domain to come up
I1101 08:29:31.463955 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:31.464618 10392 main.go:143] libmachine: no network interface addresses found for domain addons-468489 (source=lease)
I1101 08:29:31.464642 10392 main.go:143] libmachine: trying to list again with source=arp
I1101 08:29:31.465011 10392 main.go:143] libmachine: unable to find current IP address of domain addons-468489 in network mk-addons-468489 (interfaces detected: [])
I1101 08:29:31.465053 10392 retry.go:31] will retry after 2.339644955s: waiting for domain to come up
I1101 08:29:33.806067 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:33.806833 10392 main.go:143] libmachine: no network interface addresses found for domain addons-468489 (source=lease)
I1101 08:29:33.806855 10392 main.go:143] libmachine: trying to list again with source=arp
I1101 08:29:33.807111 10392 main.go:143] libmachine: unable to find current IP address of domain addons-468489 in network mk-addons-468489 (interfaces detected: [])
I1101 08:29:33.807141 10392 retry.go:31] will retry after 3.305590216s: waiting for domain to come up
I1101 08:29:37.115736 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:37.116391 10392 main.go:143] libmachine: domain addons-468489 has current primary IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:37.116412 10392 main.go:143] libmachine: found domain IP: 192.168.39.108
I1101 08:29:37.116419 10392 main.go:143] libmachine: reserving static IP address...
I1101 08:29:37.116848 10392 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-468489", mac: "52:54:00:d2:1b:e9", ip: "192.168.39.108"} in network mk-addons-468489
I1101 08:29:37.313092 10392 main.go:143] libmachine: reserved static IP address 192.168.39.108 for domain addons-468489
I1101 08:29:37.313114 10392 main.go:143] libmachine: waiting for SSH...
I1101 08:29:37.313120 10392 main.go:143] libmachine: Getting to WaitForSSH function...
I1101 08:29:37.315925 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:37.316349 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d2:1b:e9}
I1101 08:29:37.316375 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:37.316562 10392 main.go:143] libmachine: Using SSH client type: native
I1101 08:29:37.316772 10392 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil> [] 0s} 192.168.39.108 22 <nil> <nil>}
I1101 08:29:37.316783 10392 main.go:143] libmachine: About to run SSH command:
exit 0
I1101 08:29:37.428897 10392 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1101 08:29:37.429266 10392 main.go:143] libmachine: domain creation complete
I1101 08:29:37.431023 10392 machine.go:94] provisionDockerMachine start ...
I1101 08:29:37.433509 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:37.433944 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:29:37.433967 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:37.434170 10392 main.go:143] libmachine: Using SSH client type: native
I1101 08:29:37.434417 10392 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil> [] 0s} 192.168.39.108 22 <nil> <nil>}
I1101 08:29:37.434433 10392 main.go:143] libmachine: About to run SSH command:
hostname
I1101 08:29:37.544818 10392 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
I1101 08:29:37.544847 10392 buildroot.go:166] provisioning hostname "addons-468489"
I1101 08:29:37.547777 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:37.548177 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:29:37.548220 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:37.548366 10392 main.go:143] libmachine: Using SSH client type: native
I1101 08:29:37.548544 10392 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil> [] 0s} 192.168.39.108 22 <nil> <nil>}
I1101 08:29:37.548555 10392 main.go:143] libmachine: About to run SSH command:
sudo hostname addons-468489 && echo "addons-468489" | sudo tee /etc/hostname
I1101 08:29:37.675552 10392 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-468489
I1101 08:29:37.678466 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:37.678902 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:29:37.678947 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:37.679148 10392 main.go:143] libmachine: Using SSH client type: native
I1101 08:29:37.679400 10392 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil> [] 0s} 192.168.39.108 22 <nil> <nil>}
I1101 08:29:37.679422 10392 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-468489' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-468489/g' /etc/hosts;
else
echo '127.0.1.1 addons-468489' | sudo tee -a /etc/hosts;
fi
fi
I1101 08:29:37.798874 10392 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1101 08:29:37.798901 10392 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21835-5912/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-5912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-5912/.minikube}
I1101 08:29:37.798917 10392 buildroot.go:174] setting up certificates
I1101 08:29:37.798924 10392 provision.go:84] configureAuth start
I1101 08:29:37.801786 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:37.802256 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:29:37.802280 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:37.804669 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:37.805022 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:29:37.805045 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:37.805202 10392 provision.go:143] copyHostCerts
I1101 08:29:37.805291 10392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-5912/.minikube/ca.pem (1082 bytes)
I1101 08:29:37.805432 10392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-5912/.minikube/cert.pem (1123 bytes)
I1101 08:29:37.805695 10392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-5912/.minikube/key.pem (1679 bytes)
I1101 08:29:37.805865 10392 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-5912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca-key.pem org=jenkins.addons-468489 san=[127.0.0.1 192.168.39.108 addons-468489 localhost minikube]
I1101 08:29:38.026554 10392 provision.go:177] copyRemoteCerts
I1101 08:29:38.026609 10392 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1101 08:29:38.029015 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:38.029389 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:29:38.029409 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:38.029539 10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
I1101 08:29:38.119168 10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1101 08:29:38.148612 10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1101 08:29:38.181367 10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1101 08:29:38.210051 10392 provision.go:87] duration metric: took 411.113175ms to configureAuth
I1101 08:29:38.210083 10392 buildroot.go:189] setting minikube options for container-runtime
I1101 08:29:38.210304 10392 config.go:182] Loaded profile config "addons-468489": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:29:38.212821 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:38.213190 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:29:38.213234 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:38.213409 10392 main.go:143] libmachine: Using SSH client type: native
I1101 08:29:38.213586 10392 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil> [] 0s} 192.168.39.108 22 <nil> <nil>}
I1101 08:29:38.213599 10392 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1101 08:29:38.462120 10392 main.go:143] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1101 08:29:38.462145 10392 machine.go:97] duration metric: took 1.031102449s to provisionDockerMachine
I1101 08:29:38.462154 10392 client.go:176] duration metric: took 18.123088465s to LocalClient.Create
I1101 08:29:38.462169 10392 start.go:167] duration metric: took 18.12313635s to libmachine.API.Create "addons-468489"
I1101 08:29:38.462175 10392 start.go:293] postStartSetup for "addons-468489" (driver="kvm2")
I1101 08:29:38.462184 10392 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1101 08:29:38.462270 10392 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1101 08:29:38.465106 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:38.465457 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:29:38.465479 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:38.465618 10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
I1101 08:29:38.550379 10392 ssh_runner.go:195] Run: cat /etc/os-release
I1101 08:29:38.555199 10392 info.go:137] Remote host: Buildroot 2025.02
I1101 08:29:38.555256 10392 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5912/.minikube/addons for local assets ...
I1101 08:29:38.555331 10392 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5912/.minikube/files for local assets ...
I1101 08:29:38.555367 10392 start.go:296] duration metric: took 93.187011ms for postStartSetup
I1101 08:29:38.558643 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:38.559097 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:29:38.559123 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:38.559387 10392 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/config.json ...
I1101 08:29:38.559588 10392 start.go:128] duration metric: took 18.223068675s to createHost
I1101 08:29:38.561668 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:38.562140 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:29:38.562163 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:38.562347 10392 main.go:143] libmachine: Using SSH client type: native
I1101 08:29:38.562552 10392 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil> [] 0s} 192.168.39.108 22 <nil> <nil>}
I1101 08:29:38.562566 10392 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1101 08:29:38.675444 10392 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761985778.638239831
I1101 08:29:38.675465 10392 fix.go:216] guest clock: 1761985778.638239831
I1101 08:29:38.675471 10392 fix.go:229] Guest: 2025-11-01 08:29:38.638239831 +0000 UTC Remote: 2025-11-01 08:29:38.559601036 +0000 UTC m=+18.319532512 (delta=78.638795ms)
I1101 08:29:38.675485 10392 fix.go:200] guest clock delta is within tolerance: 78.638795ms
I1101 08:29:38.675489 10392 start.go:83] releasing machines lock for "addons-468489", held for 18.339046917s
I1101 08:29:38.678475 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:38.678851 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:29:38.678874 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:38.679453 10392 ssh_runner.go:195] Run: cat /version.json
I1101 08:29:38.679525 10392 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1101 08:29:38.682468 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:38.682767 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:38.682885 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:29:38.682918 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:38.683055 10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
I1101 08:29:38.683303 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:29:38.683331 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:38.683507 10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
I1101 08:29:38.761498 10392 ssh_runner.go:195] Run: systemctl --version
I1101 08:29:38.790731 10392 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1101 08:29:38.947068 10392 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1101 08:29:38.954488 10392 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1101 08:29:38.954559 10392 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1101 08:29:38.975006 10392 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1101 08:29:38.975031 10392 start.go:496] detecting cgroup driver to use...
I1101 08:29:38.975097 10392 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1101 08:29:38.994654 10392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1101 08:29:39.011254 10392 docker.go:218] disabling cri-docker service (if available) ...
I1101 08:29:39.011312 10392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1101 08:29:39.029045 10392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1101 08:29:39.045408 10392 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1101 08:29:39.197939 10392 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1101 08:29:39.406576 10392 docker.go:234] disabling docker service ...
I1101 08:29:39.406644 10392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1101 08:29:39.422971 10392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1101 08:29:39.437865 10392 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1101 08:29:39.592931 10392 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1101 08:29:39.737448 10392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1101 08:29:39.752725 10392 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1101 08:29:39.775074 10392 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1101 08:29:39.775137 10392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1101 08:29:39.786920 10392 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1101 08:29:39.786976 10392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1101 08:29:39.798917 10392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1101 08:29:39.810958 10392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1101 08:29:39.823421 10392 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1101 08:29:39.836640 10392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1101 08:29:39.849068 10392 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1101 08:29:39.869819 10392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I1101 08:29:39.882015 10392 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1101 08:29:39.892351 10392 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1101 08:29:39.892415 10392 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1101 08:29:39.912167 10392 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1101 08:29:39.923460 10392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 08:29:40.057456 10392 ssh_runner.go:195] Run: sudo systemctl restart crio
I1101 08:29:40.173283 10392 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1101 08:29:40.173371 10392 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1101 08:29:40.179118 10392 start.go:564] Will wait 60s for crictl version
I1101 08:29:40.179201 10392 ssh_runner.go:195] Run: which crictl
I1101 08:29:40.183607 10392 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1101 08:29:40.228592 10392 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1101 08:29:40.228701 10392 ssh_runner.go:195] Run: crio --version
I1101 08:29:40.257840 10392 ssh_runner.go:195] Run: crio --version
I1101 08:29:40.289257 10392 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
I1101 08:29:40.293356 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:40.293795 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:29:40.293822 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:29:40.294048 10392 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1101 08:29:40.299049 10392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1101 08:29:40.314560 10392 kubeadm.go:884] updating cluster {Name:addons-468489 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-468489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.108 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1101 08:29:40.314743 10392 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1101 08:29:40.314810 10392 ssh_runner.go:195] Run: sudo crictl images --output json
I1101 08:29:40.350006 10392 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
I1101 08:29:40.350083 10392 ssh_runner.go:195] Run: which lz4
I1101 08:29:40.354288 10392 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1101 08:29:40.359059 10392 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1101 08:29:40.359093 10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
I1101 08:29:41.695331 10392 crio.go:462] duration metric: took 1.34107457s to copy over tarball
I1101 08:29:41.695402 10392 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1101 08:29:43.298883 10392 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.603455135s)
I1101 08:29:43.298908 10392 crio.go:469] duration metric: took 1.603548837s to extract the tarball
I1101 08:29:43.298916 10392 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1101 08:29:43.339359 10392 ssh_runner.go:195] Run: sudo crictl images --output json
I1101 08:29:43.384229 10392 crio.go:514] all images are preloaded for cri-o runtime.
I1101 08:29:43.384249 10392 cache_images.go:86] Images are preloaded, skipping loading
I1101 08:29:43.384256 10392 kubeadm.go:935] updating node { 192.168.39.108 8443 v1.34.1 crio true true} ...
I1101 08:29:43.384330 10392 kubeadm.go:947] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-468489 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.108
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:addons-468489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1101 08:29:43.384389 10392 ssh_runner.go:195] Run: crio config
I1101 08:29:43.433182 10392 cni.go:84] Creating CNI manager for ""
I1101 08:29:43.433219 10392 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1101 08:29:43.433236 10392 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1101 08:29:43.433260 10392 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.108 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-468489 NodeName:addons-468489 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.108"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.108 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1101 08:29:43.433391 10392 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.108
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "addons-468489"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.39.108"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.108"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1101 08:29:43.433459 10392 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1101 08:29:43.445703 10392 binaries.go:44] Found k8s binaries, skipping transfer
I1101 08:29:43.445772 10392 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1101 08:29:43.457247 10392 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I1101 08:29:43.478048 10392 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1101 08:29:43.498719 10392 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
I1101 08:29:43.519056 10392 ssh_runner.go:195] Run: grep 192.168.39.108 control-plane.minikube.internal$ /etc/hosts
I1101 08:29:43.523136 10392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.108 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1101 08:29:43.537796 10392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 08:29:43.679728 10392 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1101 08:29:43.699695 10392 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489 for IP: 192.168.39.108
I1101 08:29:43.699717 10392 certs.go:195] generating shared ca certs ...
I1101 08:29:43.699731 10392 certs.go:227] acquiring lock for ca certs: {Name:mk23a33d19209ad24f4406326ada43ab5cb57960 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 08:29:43.699863 10392 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-5912/.minikube/ca.key
I1101 08:29:43.978072 10392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5912/.minikube/ca.crt ...
I1101 08:29:43.978096 10392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/ca.crt: {Name:mk310d4ddeb698380ce931511e46a2949bc078d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 08:29:43.978262 10392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5912/.minikube/ca.key ...
I1101 08:29:43.978273 10392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/ca.key: {Name:mk98b96a94ed9005e8095fef7c6d586931f7a99a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 08:29:43.978342 10392 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.key
I1101 08:29:44.369174 10392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.crt ...
I1101 08:29:44.369202 10392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.crt: {Name:mk8f8f4e72899c75d3a00be809552850e4649e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 08:29:44.369365 10392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.key ...
I1101 08:29:44.369395 10392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.key: {Name:mke12f3aff84934fd9656eefdf4c90c69a503a0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 08:29:44.369475 10392 certs.go:257] generating profile certs ...
I1101 08:29:44.369525 10392 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.key
I1101 08:29:44.369540 10392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt with IP's: []
I1101 08:29:44.567097 10392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt ...
I1101 08:29:44.567124 10392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: {Name:mk6e6a0ab62c910983eeeceec962694b326a21fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 08:29:44.567280 10392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.key ...
I1101 08:29:44.567292 10392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.key: {Name:mk74ca204ee8d1bdf9d5821b71407334c1b75417 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 08:29:44.567357 10392 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/apiserver.key.82d38449
I1101 08:29:44.567375 10392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/apiserver.crt.82d38449 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.108]
I1101 08:29:45.224964 10392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/apiserver.crt.82d38449 ...
I1101 08:29:45.224993 10392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/apiserver.crt.82d38449: {Name:mkf4fb16c89192136e38e71006122bca1a9554cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 08:29:45.225153 10392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/apiserver.key.82d38449 ...
I1101 08:29:45.225167 10392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/apiserver.key.82d38449: {Name:mk52cae8fb5fdc76f8f437013deea9cd816faf69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 08:29:45.225250 10392 certs.go:382] copying /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/apiserver.crt.82d38449 -> /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/apiserver.crt
I1101 08:29:45.225767 10392 certs.go:386] copying /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/apiserver.key.82d38449 -> /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/apiserver.key
I1101 08:29:45.225839 10392 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/proxy-client.key
I1101 08:29:45.225859 10392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/proxy-client.crt with IP's: []
I1101 08:29:45.835858 10392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/proxy-client.crt ...
I1101 08:29:45.835885 10392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/proxy-client.crt: {Name:mke920dc8e6c8530147466fc91ae1c4a1614912c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 08:29:45.836045 10392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/proxy-client.key ...
I1101 08:29:45.836057 10392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/proxy-client.key: {Name:mkd0b4181fb007ecb32bee7ac450c0b01527b072 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 08:29:45.836245 10392 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca-key.pem (1679 bytes)
I1101 08:29:45.836278 10392 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem (1082 bytes)
I1101 08:29:45.836297 10392 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/cert.pem (1123 bytes)
I1101 08:29:45.836314 10392 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/key.pem (1679 bytes)
I1101 08:29:45.836835 10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1101 08:29:45.868558 10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1101 08:29:45.899360 10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1101 08:29:45.929990 10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1101 08:29:45.961324 10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1101 08:29:45.991070 10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1101 08:29:46.021154 10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1101 08:29:46.055649 10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1101 08:29:46.090762 10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1101 08:29:46.123025 10392 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1101 08:29:46.146229 10392 ssh_runner.go:195] Run: openssl version
I1101 08:29:46.153115 10392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1101 08:29:46.167493 10392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1101 08:29:46.172725 10392 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 1 08:29 /usr/share/ca-certificates/minikubeCA.pem
I1101 08:29:46.172798 10392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1101 08:29:46.180474 10392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1101 08:29:46.193728 10392 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1101 08:29:46.198816 10392 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1101 08:29:46.198876 10392 kubeadm.go:401] StartCluster: {Name:addons-468489 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-468489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.108 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1101 08:29:46.198953 10392 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1101 08:29:46.199114 10392 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1101 08:29:46.239692 10392 cri.go:89] found id: ""
I1101 08:29:46.239762 10392 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1101 08:29:46.254038 10392 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1101 08:29:46.266725 10392 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1101 08:29:46.287360 10392 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1101 08:29:46.287378 10392 kubeadm.go:158] found existing configuration files:
I1101 08:29:46.287443 10392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1101 08:29:46.299161 10392 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1101 08:29:46.299260 10392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1101 08:29:46.319559 10392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1101 08:29:46.331521 10392 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1101 08:29:46.331572 10392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1101 08:29:46.343586 10392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1101 08:29:46.354924 10392 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1101 08:29:46.354986 10392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1101 08:29:46.366696 10392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1101 08:29:46.377956 10392 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1101 08:29:46.378028 10392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1101 08:29:46.389964 10392 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1101 08:29:46.551041 10392 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1101 08:29:58.344700 10392 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
I1101 08:29:58.344770 10392 kubeadm.go:319] [preflight] Running pre-flight checks
I1101 08:29:58.344852 10392 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1101 08:29:58.344959 10392 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1101 08:29:58.345093 10392 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1101 08:29:58.345225 10392 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1101 08:29:58.347031 10392 out.go:252] - Generating certificates and keys ...
I1101 08:29:58.347147 10392 kubeadm.go:319] [certs] Using existing ca certificate authority
I1101 08:29:58.347329 10392 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1101 08:29:58.347442 10392 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1101 08:29:58.347512 10392 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1101 08:29:58.347581 10392 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1101 08:29:58.347661 10392 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1101 08:29:58.347711 10392 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1101 08:29:58.347872 10392 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-468489 localhost] and IPs [192.168.39.108 127.0.0.1 ::1]
I1101 08:29:58.347923 10392 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1101 08:29:58.348094 10392 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-468489 localhost] and IPs [192.168.39.108 127.0.0.1 ::1]
I1101 08:29:58.348191 10392 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1101 08:29:58.348291 10392 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1101 08:29:58.348338 10392 kubeadm.go:319] [certs] Generating "sa" key and public key
I1101 08:29:58.348417 10392 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1101 08:29:58.348485 10392 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1101 08:29:58.348570 10392 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1101 08:29:58.348656 10392 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1101 08:29:58.348773 10392 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1101 08:29:58.348854 10392 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1101 08:29:58.348969 10392 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1101 08:29:58.349095 10392 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1101 08:29:58.351501 10392 out.go:252] - Booting up control plane ...
I1101 08:29:58.351595 10392 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1101 08:29:58.351681 10392 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1101 08:29:58.351759 10392 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1101 08:29:58.351896 10392 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1101 08:29:58.352052 10392 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1101 08:29:58.352202 10392 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1101 08:29:58.352395 10392 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1101 08:29:58.352451 10392 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1101 08:29:58.352626 10392 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1101 08:29:58.352737 10392 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1101 08:29:58.352811 10392 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 524.422641ms
I1101 08:29:58.352889 10392 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1101 08:29:58.352982 10392 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.108:8443/livez
I1101 08:29:58.353111 10392 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1101 08:29:58.353242 10392 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1101 08:29:58.353359 10392 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.568998866s
I1101 08:29:58.353430 10392 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.72911267s
I1101 08:29:58.353502 10392 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502686455s
I1101 08:29:58.353657 10392 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1101 08:29:58.353842 10392 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1101 08:29:58.353937 10392 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1101 08:29:58.354136 10392 kubeadm.go:319] [mark-control-plane] Marking the node addons-468489 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1101 08:29:58.354251 10392 kubeadm.go:319] [bootstrap-token] Using token: 3eegde.22eo73t8801ax86h
I1101 08:29:58.356430 10392 out.go:252] - Configuring RBAC rules ...
I1101 08:29:58.356512 10392 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1101 08:29:58.356608 10392 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1101 08:29:58.356782 10392 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1101 08:29:58.356953 10392 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1101 08:29:58.357062 10392 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1101 08:29:58.357151 10392 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1101 08:29:58.357312 10392 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1101 08:29:58.357381 10392 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1101 08:29:58.357422 10392 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1101 08:29:58.357428 10392 kubeadm.go:319]
I1101 08:29:58.357474 10392 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1101 08:29:58.357480 10392 kubeadm.go:319]
I1101 08:29:58.357585 10392 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1101 08:29:58.357596 10392 kubeadm.go:319]
I1101 08:29:58.357635 10392 kubeadm.go:319] mkdir -p $HOME/.kube
I1101 08:29:58.357722 10392 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1101 08:29:58.357787 10392 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1101 08:29:58.357793 10392 kubeadm.go:319]
I1101 08:29:58.357834 10392 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1101 08:29:58.357839 10392 kubeadm.go:319]
I1101 08:29:58.357908 10392 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1101 08:29:58.357922 10392 kubeadm.go:319]
I1101 08:29:58.357996 10392 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1101 08:29:58.358099 10392 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1101 08:29:58.358192 10392 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1101 08:29:58.358200 10392 kubeadm.go:319]
I1101 08:29:58.358318 10392 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1101 08:29:58.358385 10392 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1101 08:29:58.358391 10392 kubeadm.go:319]
I1101 08:29:58.358454 10392 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3eegde.22eo73t8801ax86h \
I1101 08:29:58.358679 10392 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:a5abe2adb0c939d52fba184971121a4379087a8fcf67d55f536fc49608a1d330 \
I1101 08:29:58.358709 10392 kubeadm.go:319] --control-plane
I1101 08:29:58.358715 10392 kubeadm.go:319]
I1101 08:29:58.358853 10392 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1101 08:29:58.358864 10392 kubeadm.go:319]
I1101 08:29:58.358979 10392 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3eegde.22eo73t8801ax86h \
I1101 08:29:58.359139 10392 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:a5abe2adb0c939d52fba184971121a4379087a8fcf67d55f536fc49608a1d330
I1101 08:29:58.359168 10392 cni.go:84] Creating CNI manager for ""
I1101 08:29:58.359181 10392 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1101 08:29:58.360947 10392 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1101 08:29:58.362246 10392 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1101 08:29:58.375945 10392 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I1101 08:29:58.401242 10392 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1101 08:29:58.401349 10392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 08:29:58.401364 10392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-468489 minikube.k8s.io/updated_at=2025_11_01T08_29_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192 minikube.k8s.io/name=addons-468489 minikube.k8s.io/primary=true
I1101 08:29:58.564646 10392 ops.go:34] apiserver oom_adj: -16
I1101 08:29:58.564756 10392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 08:29:59.065168 10392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 08:29:59.565725 10392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 08:30:00.065630 10392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 08:30:00.565574 10392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 08:30:01.065765 10392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 08:30:01.565866 10392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 08:30:02.065122 10392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 08:30:02.565602 10392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 08:30:03.065166 10392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 08:30:03.163895 10392 kubeadm.go:1114] duration metric: took 4.762618499s to wait for elevateKubeSystemPrivileges
I1101 08:30:03.163935 10392 kubeadm.go:403] duration metric: took 16.965062697s to StartCluster
I1101 08:30:03.163956 10392 settings.go:142] acquiring lock: {Name:mk818d33e162ca33774e3ab05f6aac30f8feaf64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 08:30:03.164097 10392 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21835-5912/kubeconfig
I1101 08:30:03.164629 10392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/kubeconfig: {Name:mk599bec02e6b7062c3926243176124a4bc71dfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 08:30:03.164872 10392 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1101 08:30:03.164883 10392 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.108 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1101 08:30:03.164947 10392 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1101 08:30:03.165071 10392 addons.go:70] Setting yakd=true in profile "addons-468489"
I1101 08:30:03.165090 10392 config.go:182] Loaded profile config "addons-468489": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:30:03.165094 10392 addons.go:70] Setting ingress=true in profile "addons-468489"
I1101 08:30:03.165112 10392 addons.go:70] Setting ingress-dns=true in profile "addons-468489"
I1101 08:30:03.165092 10392 addons.go:239] Setting addon yakd=true in "addons-468489"
I1101 08:30:03.165132 10392 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-468489"
I1101 08:30:03.165161 10392 addons.go:70] Setting registry-creds=true in profile "addons-468489"
I1101 08:30:03.165162 10392 addons.go:70] Setting default-storageclass=true in profile "addons-468489"
I1101 08:30:03.165179 10392 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-468489"
I1101 08:30:03.165192 10392 addons.go:239] Setting addon ingress=true in "addons-468489"
I1101 08:30:03.165194 10392 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-468489"
I1101 08:30:03.165249 10392 host.go:66] Checking if "addons-468489" exists ...
I1101 08:30:03.165266 10392 addons.go:70] Setting metrics-server=true in profile "addons-468489"
I1101 08:30:03.165283 10392 addons.go:239] Setting addon metrics-server=true in "addons-468489"
I1101 08:30:03.165307 10392 host.go:66] Checking if "addons-468489" exists ...
I1101 08:30:03.165379 10392 addons.go:70] Setting inspektor-gadget=true in profile "addons-468489"
I1101 08:30:03.165403 10392 addons.go:239] Setting addon inspektor-gadget=true in "addons-468489"
I1101 08:30:03.165440 10392 host.go:66] Checking if "addons-468489" exists ...
I1101 08:30:03.165610 10392 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-468489"
I1101 08:30:03.165654 10392 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-468489"
I1101 08:30:03.165676 10392 host.go:66] Checking if "addons-468489" exists ...
I1101 08:30:03.165151 10392 host.go:66] Checking if "addons-468489" exists ...
I1101 08:30:03.165721 10392 addons.go:239] Setting addon registry-creds=true in "addons-468489"
I1101 08:30:03.165746 10392 host.go:66] Checking if "addons-468489" exists ...
I1101 08:30:03.166130 10392 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-468489"
I1101 08:30:03.166175 10392 host.go:66] Checking if "addons-468489" exists ...
I1101 08:30:03.165251 10392 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-468489"
I1101 08:30:03.166446 10392 addons.go:70] Setting gcp-auth=true in profile "addons-468489"
I1101 08:30:03.166461 10392 host.go:66] Checking if "addons-468489" exists ...
I1101 08:30:03.166467 10392 mustload.go:66] Loading cluster: addons-468489
I1101 08:30:03.166487 10392 addons.go:239] Setting addon ingress-dns=true in "addons-468489"
I1101 08:30:03.166534 10392 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-468489"
I1101 08:30:03.166551 10392 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-468489"
I1101 08:30:03.166579 10392 host.go:66] Checking if "addons-468489" exists ...
I1101 08:30:03.166656 10392 config.go:182] Loaded profile config "addons-468489": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:30:03.167166 10392 addons.go:70] Setting storage-provisioner=true in profile "addons-468489"
I1101 08:30:03.167323 10392 addons.go:239] Setting addon storage-provisioner=true in "addons-468489"
I1101 08:30:03.167354 10392 host.go:66] Checking if "addons-468489" exists ...
I1101 08:30:03.167196 10392 addons.go:70] Setting registry=true in profile "addons-468489"
I1101 08:30:03.167413 10392 addons.go:239] Setting addon registry=true in "addons-468489"
I1101 08:30:03.167434 10392 host.go:66] Checking if "addons-468489" exists ...
I1101 08:30:03.167229 10392 addons.go:70] Setting volcano=true in profile "addons-468489"
I1101 08:30:03.167501 10392 addons.go:239] Setting addon volcano=true in "addons-468489"
I1101 08:30:03.167547 10392 host.go:66] Checking if "addons-468489" exists ...
I1101 08:30:03.167239 10392 addons.go:70] Setting volumesnapshots=true in profile "addons-468489"
I1101 08:30:03.168066 10392 addons.go:239] Setting addon volumesnapshots=true in "addons-468489"
I1101 08:30:03.168092 10392 host.go:66] Checking if "addons-468489" exists ...
I1101 08:30:03.165140 10392 addons.go:70] Setting cloud-spanner=true in profile "addons-468489"
I1101 08:30:03.168350 10392 addons.go:239] Setting addon cloud-spanner=true in "addons-468489"
I1101 08:30:03.168382 10392 host.go:66] Checking if "addons-468489" exists ...
I1101 08:30:03.168513 10392 out.go:179] * Verifying Kubernetes components...
I1101 08:30:03.170001 10392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 08:30:03.174255 10392 addons.go:239] Setting addon default-storageclass=true in "addons-468489"
I1101 08:30:03.174300 10392 host.go:66] Checking if "addons-468489" exists ...
I1101 08:30:03.174978 10392 host.go:66] Checking if "addons-468489" exists ...
I1101 08:30:03.175135 10392 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-468489"
I1101 08:30:03.175165 10392 host.go:66] Checking if "addons-468489" exists ...
I1101 08:30:03.175387 10392 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1101 08:30:03.175450 10392 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
I1101 08:30:03.175461 10392 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
I1101 08:30:03.175464 10392 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1101 08:30:03.176330 10392 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1101 08:30:03.175498 10392 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.5
I1101 08:30:03.175633 10392 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
W1101 08:30:03.176083 10392 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1101 08:30:03.175481 10392 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
I1101 08:30:03.177077 10392 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1101 08:30:03.177559 10392 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1101 08:30:03.178045 10392 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
I1101 08:30:03.178067 10392 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
I1101 08:30:03.177469 10392 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1101 08:30:03.178101 10392 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1101 08:30:03.178856 10392 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1101 08:30:03.178870 10392 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1101 08:30:03.179305 10392 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1101 08:30:03.178914 10392 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1101 08:30:03.179459 10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1101 08:30:03.179700 10392 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1101 08:30:03.179703 10392 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
I1101 08:30:03.179746 10392 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
I1101 08:30:03.179719 10392 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1101 08:30:03.180226 10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1101 08:30:03.179719 10392 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1101 08:30:03.180327 10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1101 08:30:03.179765 10392 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1101 08:30:03.179770 10392 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1101 08:30:03.179792 10392 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1101 08:30:03.181690 10392 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1101 08:30:03.181744 10392 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1101 08:30:03.182259 10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1101 08:30:03.181783 10392 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1101 08:30:03.182342 10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1101 08:30:03.182536 10392 out.go:179] - Using image docker.io/registry:3.0.0
I1101 08:30:03.182592 10392 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1101 08:30:03.182937 10392 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1101 08:30:03.182806 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.183351 10392 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
I1101 08:30:03.183354 10392 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1101 08:30:03.183830 10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1101 08:30:03.184052 10392 out.go:179] - Using image docker.io/busybox:stable
I1101 08:30:03.184103 10392 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1101 08:30:03.184408 10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1101 08:30:03.184574 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:30:03.184604 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.184804 10392 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1101 08:30:03.184916 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.184956 10392 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1101 08:30:03.185148 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.185150 10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1101 08:30:03.185381 10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
I1101 08:30:03.185557 10392 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1101 08:30:03.185572 10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1101 08:30:03.186708 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:30:03.186726 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:30:03.186749 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.186748 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.187326 10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
I1101 08:30:03.187326 10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
I1101 08:30:03.187606 10392 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1101 08:30:03.188436 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.189256 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.189593 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.190101 10392 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1101 08:30:03.190237 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:30:03.190274 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.190682 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.190716 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:30:03.190748 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.191057 10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
I1101 08:30:03.191235 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:30:03.191270 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.191743 10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
I1101 08:30:03.192521 10392 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1101 08:30:03.192609 10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
I1101 08:30:03.192726 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:30:03.192754 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.193441 10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
I1101 08:30:03.193500 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.194226 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.194684 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:30:03.194709 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.194794 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.195019 10392 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1101 08:30:03.195020 10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
I1101 08:30:03.195235 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.195374 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:30:03.195394 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.195714 10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
I1101 08:30:03.196002 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.196132 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:30:03.196247 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:30:03.196278 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.196446 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.196251 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.196465 10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
I1101 08:30:03.196886 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:30:03.196916 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.197011 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.197178 10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
I1101 08:30:03.197549 10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
I1101 08:30:03.197736 10392 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1101 08:30:03.197780 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:30:03.197806 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:30:03.197807 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.197851 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.198030 10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
I1101 08:30:03.198173 10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
I1101 08:30:03.199257 10392 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1101 08:30:03.199280 10392 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1101 08:30:03.201991 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.202529 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:30:03.202565 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:03.202745 10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
W1101 08:30:03.391277 10392 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:59638->192.168.39.108:22: read: connection reset by peer
I1101 08:30:03.391316 10392 retry.go:31] will retry after 285.134778ms: ssh: handshake failed: read tcp 192.168.39.1:59638->192.168.39.108:22: read: connection reset by peer
W1101 08:30:03.434857 10392 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:59648->192.168.39.108:22: read: connection reset by peer
I1101 08:30:03.434884 10392 retry.go:31] will retry after 359.33267ms: ssh: handshake failed: read tcp 192.168.39.1:59648->192.168.39.108:22: read: connection reset by peer
W1101 08:30:03.434971 10392 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:59674->192.168.39.108:22: read: connection reset by peer
I1101 08:30:03.434984 10392 retry.go:31] will retry after 238.429211ms: ssh: handshake failed: read tcp 192.168.39.1:59674->192.168.39.108:22: read: connection reset by peer
W1101 08:30:03.435024 10392 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:59662->192.168.39.108:22: read: connection reset by peer
I1101 08:30:03.435065 10392 retry.go:31] will retry after 357.609129ms: ssh: handshake failed: read tcp 192.168.39.1:59662->192.168.39.108:22: read: connection reset by peer
I1101 08:30:03.706734 10392 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1101 08:30:03.706820 10392 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1101 08:30:04.042176 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1101 08:30:04.061546 10392 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1101 08:30:04.061573 10392 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1101 08:30:04.138326 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1101 08:30:04.167264 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1101 08:30:04.199045 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1101 08:30:04.230314 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1101 08:30:04.332010 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1101 08:30:04.366496 10392 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1101 08:30:04.366527 10392 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1101 08:30:04.388613 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1101 08:30:04.418567 10392 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1101 08:30:04.418588 10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1101 08:30:04.531931 10392 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1101 08:30:04.531953 10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1101 08:30:04.602372 10392 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1101 08:30:04.602396 10392 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1101 08:30:04.623278 10392 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1101 08:30:04.623298 10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1101 08:30:04.638958 10392 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1101 08:30:04.638986 10392 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1101 08:30:04.927745 10392 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1101 08:30:04.927769 10392 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1101 08:30:04.929411 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1101 08:30:05.007339 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1101 08:30:05.063646 10392 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1101 08:30:05.063674 10392 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1101 08:30:05.104918 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 08:30:05.176550 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1101 08:30:05.204105 10392 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1101 08:30:05.204141 10392 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1101 08:30:05.233124 10392 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1101 08:30:05.233154 10392 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1101 08:30:05.335011 10392 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1101 08:30:05.335037 10392 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1101 08:30:05.336054 10392 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1101 08:30:05.336076 10392 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1101 08:30:05.518105 10392 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1101 08:30:05.518133 10392 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1101 08:30:05.548240 10392 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1101 08:30:05.548273 10392 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1101 08:30:05.676546 10392 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1101 08:30:05.676575 10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1101 08:30:05.716559 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1101 08:30:05.775309 10392 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1101 08:30:05.775333 10392 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1101 08:30:05.841927 10392 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1101 08:30:05.841954 10392 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1101 08:30:06.040149 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1101 08:30:06.209743 10392 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1101 08:30:06.209762 10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1101 08:30:06.330963 10392 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1101 08:30:06.330995 10392 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1101 08:30:06.693643 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1101 08:30:06.743815 10392 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1101 08:30:06.743846 10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1101 08:30:07.159397 10392 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1101 08:30:07.159435 10392 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1101 08:30:07.171625 10392 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.464769922s)
I1101 08:30:07.171664 10392 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
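[editor's note] The `sed` pipeline logged above is hard to read inline. Reconstructed from its two insert expressions, the edited CoreDNS Corefile gains a `log` directive before `errors` and a `hosts` block before the `forward` line; other plugins (abbreviated here as `...`) are left untouched. This is a sketch of the resulting server block, not the actual ConfigMap contents:

```
.:53 {
    log
    errors
    ...
    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
}
```

The `hosts` plugin answers queries for `host.minikube.internal` directly and `fallthrough` passes everything else on to `forward`, which is what makes the host reachable by name from inside the cluster.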
I1101 08:30:07.171674 10392 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.464910165s)
I1101 08:30:07.171728 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.129522115s)
I1101 08:30:07.172532 10392 node_ready.go:35] waiting up to 6m0s for node "addons-468489" to be "Ready" ...
I1101 08:30:07.183789 10392 node_ready.go:49] node "addons-468489" is "Ready"
I1101 08:30:07.183821 10392 node_ready.go:38] duration metric: took 11.264748ms for node "addons-468489" to be "Ready" ...
I1101 08:30:07.183834 10392 api_server.go:52] waiting for apiserver process to appear ...
I1101 08:30:07.183888 10392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 08:30:07.626405 10392 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1101 08:30:07.626427 10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1101 08:30:07.680736 10392 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-468489" context rescaled to 1 replicas
I1101 08:30:07.975695 10392 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1101 08:30:07.975717 10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1101 08:30:08.316434 10392 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1101 08:30:08.316457 10392 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1101 08:30:08.580662 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1101 08:30:09.574961 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.407661305s)
I1101 08:30:09.574999 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.375928298s)
I1101 08:30:09.575055 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.344717575s)
I1101 08:30:09.575084 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.243046067s)
I1101 08:30:09.575154 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.18651527s)
I1101 08:30:09.575373 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.437015745s)
I1101 08:30:10.508042 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.578596704s)
I1101 08:30:10.620951 10392 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1101 08:30:10.623710 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:10.624167 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:30:10.624195 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:10.624418 10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
I1101 08:30:11.047958 10392 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1101 08:30:11.147489 10392 addons.go:239] Setting addon gcp-auth=true in "addons-468489"
I1101 08:30:11.147543 10392 host.go:66] Checking if "addons-468489" exists ...
I1101 08:30:11.149825 10392 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1101 08:30:11.152708 10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:11.153262 10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
I1101 08:30:11.153303 10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
I1101 08:30:11.153474 10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
I1101 08:30:12.408637 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.401262606s)
I1101 08:30:12.408670 10392 addons.go:480] Verifying addon ingress=true in "addons-468489"
I1101 08:30:12.408766 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.303806003s)
W1101 08:30:12.408813 10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget created
serviceaccount/gadget created
configmap/gadget created
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
role.rbac.authorization.k8s.io/gadget-role created
rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
daemonset.apps/gadget created
stderr:
Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 08:30:12.408842 10392 retry.go:31] will retry after 288.598901ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget created
serviceaccount/gadget created
configmap/gadget created
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
role.rbac.authorization.k8s.io/gadget-role created
rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
daemonset.apps/gadget created
stderr:
Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
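[editor's note] The validation error above means client-side schema validation found an object in `ig-crd.yaml` missing its type metadata: every Kubernetes manifest must declare `apiVersion` and `kind` at the top level. For illustration only (the gadget CRD's real group and names are not shown in this log, so the values below are assumptions), a CRD that passes this check starts like:

```yaml
# Minimal type metadata kubectl validation requires on every object.
# Group/resource names here are hypothetical, not taken from ig-crd.yaml.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: traces.gadget.example.io
```

Consistent with that diagnosis, the retry at 08:30:12.698016 below re-runs the same apply with `--force`; `--validate=false` (suggested by the error text) would be the other way to bypass the check.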
I1101 08:30:12.408867 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.232232585s)
I1101 08:30:12.408888 10392 addons.go:480] Verifying addon registry=true in "addons-468489"
I1101 08:30:12.408933 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.692338295s)
I1101 08:30:12.408992 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.368816743s)
I1101 08:30:12.408997 10392 addons.go:480] Verifying addon metrics-server=true in "addons-468489"
I1101 08:30:12.410411 10392 out.go:179] * Verifying ingress addon...
I1101 08:30:12.411408 10392 out.go:179] * Verifying registry addon...
I1101 08:30:12.411414 10392 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-468489 service yakd-dashboard -n yakd-dashboard
I1101 08:30:12.412639 10392 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1101 08:30:12.413316 10392 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1101 08:30:12.503710 10392 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1101 08:30:12.503741 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:12.503722 10392 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1101 08:30:12.503759 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:12.621620 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.927935366s)
I1101 08:30:12.621669 10392 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.437759045s)
W1101 08:30:12.621674 10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1101 08:30:12.621695 10392 api_server.go:72] duration metric: took 9.456787261s to wait for apiserver process to appear ...
I1101 08:30:12.621699 10392 retry.go:31] will retry after 217.397158ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
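[editor's note] This failure is an ordering problem, not a bad manifest: the `VolumeSnapshotClass` object was applied in the same `kubectl apply` batch as the `volumesnapshotclasses.snapshot.storage.k8s.io` CRD that defines it, and the API server had not finished registering the CRD when the custom object arrived, so the REST mapping lookup failed. A sketch of the object that could not be mapped, with the name taken from the error and the `driver`/`deletionPolicy` values assumed (the log does not show the file's contents):

```yaml
# Only valid once the VolumeSnapshotClass CRD (applied in the same batch)
# is registered with the API server; until then kubectl reports
# "no matches for kind VolumeSnapshotClass".
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass
driver: hostpath.csi.k8s.io   # assumed driver name
deletionPolicy: Delete        # assumed policy
```

The retry after 217ms (logged at 08:30:12.621699 above) succeeds in such cases because the CRDs created in the first attempt are registered by the time the batch is re-applied.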
I1101 08:30:12.621703 10392 api_server.go:88] waiting for apiserver healthz status ...
I1101 08:30:12.621723 10392 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
I1101 08:30:12.641132 10392 api_server.go:279] https://192.168.39.108:8443/healthz returned 200:
ok
I1101 08:30:12.642571 10392 api_server.go:141] control plane version: v1.34.1
I1101 08:30:12.642592 10392 api_server.go:131] duration metric: took 20.8825ms to wait for apiserver health ...
I1101 08:30:12.642600 10392 system_pods.go:43] waiting for kube-system pods to appear ...
I1101 08:30:12.657340 10392 system_pods.go:59] 15 kube-system pods found
I1101 08:30:12.657381 10392 system_pods.go:61] "amd-gpu-device-plugin-wx8s2" [81d7a980-35fc-40ae-a47f-4be99c0b6c65] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1101 08:30:12.657393 10392 system_pods.go:61] "coredns-66bc5c9577-ms7np" [d9442c37-8e1e-4201-9f54-a883e9756f4c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1101 08:30:12.657404 10392 system_pods.go:61] "coredns-66bc5c9577-sjgmx" [66422fdc-0c8f-4909-b971-478ee3ec6443] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1101 08:30:12.657410 10392 system_pods.go:61] "etcd-addons-468489" [264ceabd-3ca1-4077-89d1-f38eb22dffa5] Running
I1101 08:30:12.657417 10392 system_pods.go:61] "kube-apiserver-addons-468489" [0f311ff5-25a6-4ac0-b279-0a23db6667f7] Running
I1101 08:30:12.657426 10392 system_pods.go:61] "kube-controller-manager-addons-468489" [f14d55ce-f86e-497f-ad0d-8080ce321467] Running
I1101 08:30:12.657433 10392 system_pods.go:61] "kube-ingress-dns-minikube" [36080b1f-6e52-4871-bf53-646c532b90bb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1101 08:30:12.657441 10392 system_pods.go:61] "kube-proxy-d6zrs" [476d893f-eeca-41a3-aa64-4f3340875cdf] Running
I1101 08:30:12.657445 10392 system_pods.go:61] "kube-scheduler-addons-468489" [7b378d38-fbcf-4987-b14d-3aa3c65a78de] Running
I1101 08:30:12.657450 10392 system_pods.go:61] "metrics-server-85b7d694d7-fq64r" [fa41a986-93b3-4aff-bb56-494cf440e1f9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1101 08:30:12.657458 10392 system_pods.go:61] "nvidia-device-plugin-daemonset-f2qxl" [ec4ee384-540b-4a75-84b3-4e570d3d9f23] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1101 08:30:12.657470 10392 system_pods.go:61] "registry-6b586f9694-xfrhn" [f3392fde-46f3-42dc-832d-20224c4f0549] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1101 08:30:12.657478 10392 system_pods.go:61] "registry-creds-764b6fb674-kv2dx" [50f610f4-b848-4266-a771-a9ad1114d203] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1101 08:30:12.657487 10392 system_pods.go:61] "registry-proxy-rhvsz" [55e49aa2-d062-47e2-8c75-d338178ea4a8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1101 08:30:12.657494 10392 system_pods.go:61] "storage-provisioner" [4b0ce500-deaa-4b2b-9613-8479f762e6b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1101 08:30:12.657505 10392 system_pods.go:74] duration metric: took 14.89888ms to wait for pod list to return data ...
I1101 08:30:12.657519 10392 default_sa.go:34] waiting for default service account to be created ...
I1101 08:30:12.698016 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 08:30:12.704231 10392 default_sa.go:45] found service account: "default"
I1101 08:30:12.704250 10392 default_sa.go:55] duration metric: took 46.725168ms for default service account to be created ...
I1101 08:30:12.704262 10392 system_pods.go:116] waiting for k8s-apps to be running ...
I1101 08:30:12.755551 10392 system_pods.go:86] 17 kube-system pods found
I1101 08:30:12.755589 10392 system_pods.go:89] "amd-gpu-device-plugin-wx8s2" [81d7a980-35fc-40ae-a47f-4be99c0b6c65] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1101 08:30:12.755600 10392 system_pods.go:89] "coredns-66bc5c9577-ms7np" [d9442c37-8e1e-4201-9f54-a883e9756f4c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1101 08:30:12.755614 10392 system_pods.go:89] "coredns-66bc5c9577-sjgmx" [66422fdc-0c8f-4909-b971-478ee3ec6443] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1101 08:30:12.755623 10392 system_pods.go:89] "etcd-addons-468489" [264ceabd-3ca1-4077-89d1-f38eb22dffa5] Running
I1101 08:30:12.755633 10392 system_pods.go:89] "kube-apiserver-addons-468489" [0f311ff5-25a6-4ac0-b279-0a23db6667f7] Running
I1101 08:30:12.755639 10392 system_pods.go:89] "kube-controller-manager-addons-468489" [f14d55ce-f86e-497f-ad0d-8080ce321467] Running
I1101 08:30:12.755647 10392 system_pods.go:89] "kube-ingress-dns-minikube" [36080b1f-6e52-4871-bf53-646c532b90bb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1101 08:30:12.755651 10392 system_pods.go:89] "kube-proxy-d6zrs" [476d893f-eeca-41a3-aa64-4f3340875cdf] Running
I1101 08:30:12.755657 10392 system_pods.go:89] "kube-scheduler-addons-468489" [7b378d38-fbcf-4987-b14d-3aa3c65a78de] Running
I1101 08:30:12.755668 10392 system_pods.go:89] "metrics-server-85b7d694d7-fq64r" [fa41a986-93b3-4aff-bb56-494cf440e1f9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1101 08:30:12.755676 10392 system_pods.go:89] "nvidia-device-plugin-daemonset-f2qxl" [ec4ee384-540b-4a75-84b3-4e570d3d9f23] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1101 08:30:12.755685 10392 system_pods.go:89] "registry-6b586f9694-xfrhn" [f3392fde-46f3-42dc-832d-20224c4f0549] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1101 08:30:12.755693 10392 system_pods.go:89] "registry-creds-764b6fb674-kv2dx" [50f610f4-b848-4266-a771-a9ad1114d203] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1101 08:30:12.755701 10392 system_pods.go:89] "registry-proxy-rhvsz" [55e49aa2-d062-47e2-8c75-d338178ea4a8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1101 08:30:12.755707 10392 system_pods.go:89] "snapshot-controller-7d9fbc56b8-79mgm" [5a850e29-a396-460b-9e3a-b1253224ae87] Pending
I1101 08:30:12.755715 10392 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p4lmm" [a0898425-e644-493d-a304-1fb4bcba103b] Pending
I1101 08:30:12.755722 10392 system_pods.go:89] "storage-provisioner" [4b0ce500-deaa-4b2b-9613-8479f762e6b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1101 08:30:12.755730 10392 system_pods.go:126] duration metric: took 51.463181ms to wait for k8s-apps to be running ...
I1101 08:30:12.755740 10392 system_svc.go:44] waiting for kubelet service to be running ....
I1101 08:30:12.755808 10392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1101 08:30:12.840103 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1101 08:30:12.925433 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:12.928880 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:13.427147 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:13.429057 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:13.953781 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:13.958543 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:14.011569 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.430862697s)
I1101 08:30:14.011603 10392 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-468489"
I1101 08:30:14.011650 10392 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.861791562s)
I1101 08:30:14.013557 10392 out.go:179] * Verifying csi-hostpath-driver addon...
I1101 08:30:14.013566 10392 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
I1101 08:30:14.015062 10392 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1101 08:30:14.015584 10392 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1101 08:30:14.016600 10392 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1101 08:30:14.016616 10392 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1101 08:30:14.024455 10392 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1101 08:30:14.024472 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:14.181854 10392 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1101 08:30:14.181883 10392 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1101 08:30:14.321285 10392 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1101 08:30:14.321317 10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1101 08:30:14.427287 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:14.427312 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:14.483889 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1101 08:30:14.525108 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:14.918308 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:14.918784 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:15.019838 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:15.419098 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:15.419232 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:15.519747 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:15.922265 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:15.922649 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:16.038244 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:16.164717 10392 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.408881174s)
I1101 08:30:16.164759 10392 system_svc.go:56] duration metric: took 3.409014547s WaitForService to wait for kubelet
I1101 08:30:16.164772 10392 kubeadm.go:587] duration metric: took 12.999862562s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1101 08:30:16.164797 10392 node_conditions.go:102] verifying NodePressure condition ...
I1101 08:30:16.164859 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.324712874s)
I1101 08:30:16.165947 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.46789387s)
W1101 08:30:16.165980 10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 08:30:16.165999 10392 retry.go:31] will retry after 542.604514ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 08:30:16.181086 10392 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1101 08:30:16.181114 10392 node_conditions.go:123] node cpu capacity is 2
I1101 08:30:16.181127 10392 node_conditions.go:105] duration metric: took 16.322752ms to run NodePressure ...
I1101 08:30:16.181141 10392 start.go:242] waiting for startup goroutines ...
I1101 08:30:16.517975 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:16.527560 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:16.580333 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.096403789s)
I1101 08:30:16.581452 10392 addons.go:480] Verifying addon gcp-auth=true in "addons-468489"
I1101 08:30:16.583367 10392 out.go:179] * Verifying gcp-auth addon...
I1101 08:30:16.585595 10392 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1101 08:30:16.607796 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:16.615190 10392 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1101 08:30:16.615219 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:16.709454 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 08:30:16.919591 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:16.922056 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:17.027344 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:17.093029 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:17.421113 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:17.421311 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:17.522202 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:17.591344 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:17.921720 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:17.922261 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:18.020928 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:18.089681 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:18.096659 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.387169921s)
W1101 08:30:18.096691 10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 08:30:18.096708 10392 retry.go:31] will retry after 393.17056ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 08:30:18.421778 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:18.422684 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:18.490844 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 08:30:18.522178 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:18.592305 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:18.918881 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:18.922142 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:19.019524 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:19.090800 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:19.423633 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:19.423719 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:19.521303 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:19.586618 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.095737641s)
W1101 08:30:19.586655 10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 08:30:19.586675 10392 retry.go:31] will retry after 1.214746941s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 08:30:19.589134 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:19.918938 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:19.920707 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:20.019762 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:20.090329 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:20.416683 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:20.418154 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:20.522059 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:20.589941 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:20.802301 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 08:30:20.918468 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:20.922497 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:21.026120 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:21.094531 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:21.417174 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:21.420657 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:21.521697 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:21.595169 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:21.806494 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.004156159s)
W1101 08:30:21.806525 10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 08:30:21.806541 10392 retry.go:31] will retry after 1.026170972s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 08:30:21.918142 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:21.918236 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:22.020182 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:22.089373 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:22.419502 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:22.420477 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:22.520201 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:22.590080 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:22.833451 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 08:30:22.916821 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:22.918179 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:23.022665 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:23.089467 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:23.417348 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:23.417389 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:23.519302 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
W1101 08:30:23.540890 10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 08:30:23.540918 10392 retry.go:31] will retry after 1.13933478s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 08:30:23.590615 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:23.919332 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:23.921317 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:24.021511 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:24.092057 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:24.420242 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:24.422658 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:24.519220 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:24.589781 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:24.680947 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 08:30:24.917442 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:24.922442 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:25.021628 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:25.091072 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:25.418545 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:25.421872 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:25.520640 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:25.590720 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:25.845301 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.16430914s)
W1101 08:30:25.845349 10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 08:30:25.845368 10392 retry.go:31] will retry after 3.96310162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 08:30:25.921607 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:25.921649 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:26.019141 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:26.090061 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:26.418594 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:26.419127 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:26.519948 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:26.588684 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:26.919894 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:26.922829 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:27.020550 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:27.089908 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:27.417252 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:27.417814 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:27.521388 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:27.589099 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:27.919396 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:27.919539 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:28.020522 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:28.121101 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:28.418698 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:28.421501 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:28.520688 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:28.590420 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:28.917047 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:28.917574 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:29.021383 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:29.088755 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:29.422862 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:29.422901 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:29.519482 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:29.589348 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:29.808598 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 08:30:29.920469 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:29.920955 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:30.023592 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:30.089386 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:30.417062 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:30.421598 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:30.522898 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:30.588790 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:30.920260 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:30.920720 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:30.933041 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.124408751s)
W1101 08:30:30.933084 10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 08:30:30.933105 10392 retry.go:31] will retry after 5.481687476s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 08:30:31.020957 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:31.090737 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:31.418029 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:31.418038 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:31.519809 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:31.590483 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:31.919481 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:31.919666 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:32.020231 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:32.089015 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:32.423724 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:32.424114 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:32.521755 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:32.589794 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:32.916498 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:32.916609 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:33.019972 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:33.090813 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:33.419759 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:33.422047 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:33.521188 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:33.591025 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:33.926737 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:33.928067 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:34.019566 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:34.090038 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:34.418505 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:34.419727 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:34.521252 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:34.591582 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:34.916268 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:34.919883 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:35.021167 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:35.089327 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:35.416145 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:35.421286 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:35.525977 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:35.592261 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:35.998513 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:36.001248 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:36.023437 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:36.097712 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:36.415306 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 08:30:36.418799 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:36.427136 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:36.521792 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:36.589075 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:36.918552 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:36.919948 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:37.143415 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:37.145063 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:37.420055 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:37.421857 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:37.524770 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:37.590063 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:37.646763 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.231415583s)
W1101 08:30:37.646807 10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 08:30:37.646831 10392 retry.go:31] will retry after 5.025033795s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 08:30:37.916790 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:37.919516 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:38.127633 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:38.131524 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:38.418999 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:38.420016 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:38.519313 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:38.590063 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:38.916193 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:38.917179 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:39.019750 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:39.088877 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:39.417251 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:39.417889 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:39.519139 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:39.589109 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:39.917230 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:39.917415 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:40.019718 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:40.088508 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:40.420635 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:40.420774 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:40.520338 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:40.590524 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:40.917851 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:40.917978 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:41.022828 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:41.088912 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:41.417017 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:41.418828 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:41.519781 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:41.590160 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:41.917489 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:41.917618 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:42.021115 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:42.089805 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:42.419348 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:42.419429 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:42.519560 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:42.591927 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:42.673089 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 08:30:42.995204 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:42.999753 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:43.020682 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:43.090517 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:43.419475 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:43.419833 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:43.519931 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:43.589653 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:43.708353 10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.035222084s)
W1101 08:30:43.708390 10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 08:30:43.708406 10392 retry.go:31] will retry after 12.909151826s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 08:30:43.919604 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:43.921302 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:44.020371 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:44.089479 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:44.418541 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:44.419533 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:44.519724 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:44.588645 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:44.919796 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:44.920762 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:45.020316 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:45.089149 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:45.420554 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:45.421031 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:45.523134 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:45.591903 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:45.922801 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:45.929079 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:46.024486 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:46.090368 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:46.417598 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:46.418296 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:46.522704 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:46.589755 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:46.917297 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:46.919766 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:47.020277 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:47.089472 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:47.417430 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:47.417584 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 08:30:47.519146 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:47.589548 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:47.919717 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:47.919898 10392 kapi.go:107] duration metric: took 35.506582059s to wait for kubernetes.io/minikube-addons=registry ...
I1101 08:30:48.019665 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:48.090391 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:48.417512 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:48.519656 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:48.588956 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:48.917653 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:49.020172 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:49.090533 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:49.418634 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:49.521351 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:49.590573 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:49.916932 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:50.023166 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:50.089650 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:50.474183 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:50.523407 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:50.590015 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:50.916190 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:51.020176 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:51.088989 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:51.416685 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:51.519708 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:51.589260 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:51.917074 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:52.021703 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:52.092682 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:52.417663 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:52.532452 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:52.591689 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:52.917774 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:53.023675 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:53.089486 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:53.416594 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:53.519397 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:53.589294 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:53.922137 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:54.022219 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:54.089299 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:54.416490 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:54.520797 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:54.589483 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:54.916254 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:55.019416 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:55.089279 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:55.418699 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:55.521771 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:55.590070 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:55.918018 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:56.025849 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:56.091126 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:56.418476 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:56.520382 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:56.590299 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:56.618451 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 08:30:56.918310 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:57.021946 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:57.090943 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:57.420361 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:57.520065 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:57.591085 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
W1101 08:30:57.617632 10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 08:30:57.617657 10392 retry.go:31] will retry after 10.651159929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
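The validation failure above ("apiVersion not set, kind not set") means at least one document in `ig-crd.yaml` is missing the two required Kubernetes type fields, so kubectl refuses it before it ever reaches the API server. A naive pre-flight check for this condition can be sketched as follows (this is an illustrative helper, not part of minikube or kubectl; it does a textual scan rather than a full YAML parse):

```go
package main

import (
	"fmt"
	"strings"
)

// hasTypeMeta reports whether every non-empty document in a
// multi-document YAML manifest declares apiVersion and kind.
// Naive textual check only; a real tool would parse the YAML.
func hasTypeMeta(manifest string) bool {
	for _, doc := range strings.Split(manifest, "\n---") {
		if strings.TrimSpace(doc) == "" {
			continue
		}
		if !strings.Contains(doc, "apiVersion:") || !strings.Contains(doc, "kind:") {
			return false
		}
	}
	return true
}

func main() {
	good := "apiVersion: v1\nkind: Namespace\nmetadata:\n  name: gadget\n"
	bad := "metadata:\n  name: gadget\n" // missing apiVersion and kind
	fmt.Println(hasTypeMeta(good)) // true
	fmt.Println(hasTypeMeta(bad))  // false
}
```

Note that retrying the same `kubectl apply`, as the addon manager does here, cannot succeed until the manifest file itself is fixed; the error is deterministic, not transient.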
I1101 08:30:57.916015 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:58.019562 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:58.090239 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:58.416910 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:58.519503 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:58.591471 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:58.916375 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:59.023969 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:59.089071 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:59.417961 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:30:59.519122 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:30:59.591321 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:30:59.917504 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:00.020248 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:00.089162 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:00.416900 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:00.520448 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:00.590850 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:00.916129 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:01.022270 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:01.090323 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:01.417636 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:01.520753 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:01.590877 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:01.917174 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:02.021562 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:02.092549 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:02.417520 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:02.520436 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:02.591323 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:02.917923 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:03.019980 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:03.090019 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:03.417173 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:03.520228 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:03.589746 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:03.917717 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:04.020877 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:04.088765 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:04.419471 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:04.524278 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:04.594734 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:04.916135 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:05.021832 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:05.127161 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:05.421518 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:05.520095 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:05.588850 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:05.917561 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:06.034402 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:06.458707 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:06.488355 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:06.572404 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:06.595902 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:06.917273 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:07.024838 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:07.089032 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:07.417280 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:07.520270 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:07.592695 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:07.921152 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:08.019668 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:08.090085 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:08.269341 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 08:31:08.420123 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:08.524518 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:08.622450 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:08.920655 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:09.020533 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:09.089928 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
W1101 08:31:09.155543 10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 08:31:09.155586 10392 retry.go:31] will retry after 26.236601913s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 08:31:09.419702 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:09.519993 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:09.590891 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:09.916975 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:10.025409 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:10.125561 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:10.416907 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:10.519647 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:10.591199 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:10.918878 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:11.020608 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:11.091111 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:11.417383 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:11.524179 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:11.622482 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:11.915845 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:12.020457 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:12.090934 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:12.417668 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:12.520343 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:12.589896 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:12.918173 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:13.020032 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:13.089621 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:13.416806 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:13.519633 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:13.592374 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:14.191421 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:14.231982 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:14.233617 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:14.417666 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:14.520902 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:14.589187 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:14.919389 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:15.019947 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:15.089813 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:15.420272 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:15.521158 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:15.589626 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:15.916179 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:16.020699 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:16.089914 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:16.419192 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:16.522740 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:16.589928 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:16.919772 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:17.022143 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:17.091646 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:17.417909 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:17.520226 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:17.590145 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:17.921944 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:18.018671 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:18.089728 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:18.418328 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:18.524732 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:18.590416 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:18.918480 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:19.020165 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 08:31:19.092887 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:19.420855 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:19.716075 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:19.716251 10392 kapi.go:107] duration metric: took 1m5.700661688s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1101 08:31:19.918980 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:20.093482 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:20.419301 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:20.589039 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:20.918988 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:21.089955 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:21.420019 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:21.588793 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:21.916608 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:22.091234 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:22.420172 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:22.593020 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:23.001654 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:23.091242 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:23.416920 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:23.588896 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:24.051193 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:24.089164 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:24.416680 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:24.590984 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:24.920948 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:25.089963 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:25.417063 10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 08:31:25.589667 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:25.916492 10392 kapi.go:107] duration metric: took 1m13.503852258s to wait for app.kubernetes.io/name=ingress-nginx ...
I1101 08:31:26.090136 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:26.589052 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:27.097000 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:27.591787 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:28.092228 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:28.589623 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:29.090157 10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 08:31:29.589358 10392 kapi.go:107] duration metric: took 1m13.00376217s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1101 08:31:29.591226 10392 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-468489 cluster.
I1101 08:31:29.592684 10392 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1101 08:31:29.594034 10392 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
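The `gcp-auth-skip-secret` hint above can be sketched as a pod spec carrying that label. This is a minimal illustration only (the pod name, file path, and label value here are hypothetical, not taken from the test run):

```shell
# Hypothetical sketch: a pod labeled so the gcp-auth addon skips
# mounting GCP credentials into it. Pod name/value are illustrative.
cat > /tmp/pod-skip-secret.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: example-no-creds
  labels:
    gcp-auth-skip-secret: "true"
spec:
  containers:
  - name: app
    image: busybox
EOF
# Show the label that gcp-auth checks for:
grep 'gcp-auth-skip-secret' /tmp/pod-skip-secret.yaml
```

Applying such a manifest with `kubectl apply -f` would create a pod that the gcp-auth webhook leaves untouched, per the message above.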
I1101 08:31:35.392803 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
W1101 08:31:36.104186 10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 08:31:36.104242 10392 retry.go:31] will retry after 24.243133996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 08:32:00.348418 10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
W1101 08:32:01.045944 10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
W1101 08:32:01.046043 10392 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
]
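The repeated `apply` failure above is kubectl's client-side validation rejecting a manifest whose top-level `apiVersion` and `kind` fields are missing. As a hedged sketch (the file path and CRD contents below are illustrative, not the actual `ig-crd.yaml` shipped by the addon), every manifest document must start with both fields for validation to pass:

```shell
# Hypothetical sketch: the minimal top-level fields kubectl validates.
# A manifest lacking either line fails with
# "error validating data: [apiVersion not set, kind not set]".
cat > /tmp/ig-crd-header.yaml <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: traces.gadget.kinvolk.io
EOF
# Confirm both required fields are present:
grep -E '^(apiVersion|kind):' /tmp/ig-crd-header.yaml
```

Passing `--validate=false`, as the error message suggests, would suppress the check rather than fix the manifest, so the retry loop above fails the same way each attempt.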
I1101 08:32:01.047841 10392 out.go:179] * Enabled addons: amd-gpu-device-plugin, registry-creds, cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner, default-storageclass, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I1101 08:32:01.049348 10392 addons.go:515] duration metric: took 1m57.884408151s for enable addons: enabled=[amd-gpu-device-plugin registry-creds cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner default-storageclass storage-provisioner-rancher metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I1101 08:32:01.049387 10392 start.go:247] waiting for cluster config update ...
I1101 08:32:01.049411 10392 start.go:256] writing updated cluster config ...
I1101 08:32:01.049638 10392 ssh_runner.go:195] Run: rm -f paused
I1101 08:32:01.055666 10392 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1101 08:32:01.059389 10392 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sjgmx" in "kube-system" namespace to be "Ready" or be gone ...
I1101 08:32:01.064789 10392 pod_ready.go:94] pod "coredns-66bc5c9577-sjgmx" is "Ready"
I1101 08:32:01.064809 10392 pod_ready.go:86] duration metric: took 5.402573ms for pod "coredns-66bc5c9577-sjgmx" in "kube-system" namespace to be "Ready" or be gone ...
I1101 08:32:01.067104 10392 pod_ready.go:83] waiting for pod "etcd-addons-468489" in "kube-system" namespace to be "Ready" or be gone ...
I1101 08:32:01.072548 10392 pod_ready.go:94] pod "etcd-addons-468489" is "Ready"
I1101 08:32:01.072564 10392 pod_ready.go:86] duration metric: took 5.445456ms for pod "etcd-addons-468489" in "kube-system" namespace to be "Ready" or be gone ...
I1101 08:32:01.075233 10392 pod_ready.go:83] waiting for pod "kube-apiserver-addons-468489" in "kube-system" namespace to be "Ready" or be gone ...
I1101 08:32:01.079928 10392 pod_ready.go:94] pod "kube-apiserver-addons-468489" is "Ready"
I1101 08:32:01.079946 10392 pod_ready.go:86] duration metric: took 4.697885ms for pod "kube-apiserver-addons-468489" in "kube-system" namespace to be "Ready" or be gone ...
I1101 08:32:01.082185 10392 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-468489" in "kube-system" namespace to be "Ready" or be gone ...
I1101 08:32:01.460536 10392 pod_ready.go:94] pod "kube-controller-manager-addons-468489" is "Ready"
I1101 08:32:01.460568 10392 pod_ready.go:86] duration metric: took 378.366246ms for pod "kube-controller-manager-addons-468489" in "kube-system" namespace to be "Ready" or be gone ...
I1101 08:32:01.661787 10392 pod_ready.go:83] waiting for pod "kube-proxy-d6zrs" in "kube-system" namespace to be "Ready" or be gone ...
I1101 08:32:02.061035 10392 pod_ready.go:94] pod "kube-proxy-d6zrs" is "Ready"
I1101 08:32:02.061062 10392 pod_ready.go:86] duration metric: took 399.253022ms for pod "kube-proxy-d6zrs" in "kube-system" namespace to be "Ready" or be gone ...
I1101 08:32:02.259853 10392 pod_ready.go:83] waiting for pod "kube-scheduler-addons-468489" in "kube-system" namespace to be "Ready" or be gone ...
I1101 08:32:02.660961 10392 pod_ready.go:94] pod "kube-scheduler-addons-468489" is "Ready"
I1101 08:32:02.660985 10392 pod_ready.go:86] duration metric: took 401.111669ms for pod "kube-scheduler-addons-468489" in "kube-system" namespace to be "Ready" or be gone ...
I1101 08:32:02.660996 10392 pod_ready.go:40] duration metric: took 1.605305871s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1101 08:32:02.703824 10392 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
I1101 08:32:02.705894 10392 out.go:179] * Done! kubectl is now configured to use "addons-468489" cluster and "default" namespace by default
==> CRI-O <==
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.156648210Z" level=debug msg="Received container exit code: 0, message: " file="oci/runtime_oci.go:670" id=9043388d-cdd0-4ee3-9248-f1dd91a81ac0 name=/runtime.v1.RuntimeService/ExecSync
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.156822520Z" level=debug msg="Response: &ExecSyncResponse{Stdout:[FILTERED],Stderr:[],ExitCode:0,}" file="otel-collector/interceptors.go:74" id=9043388d-cdd0-4ee3-9248-f1dd91a81ac0 name=/runtime.v1.RuntimeService/ExecSync
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.172948380Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6992720f-595d-41af-8143-a5ea77d6e484 name=/runtime.v1.RuntimeService/Version
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.173314779Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6992720f-595d-41af-8143-a5ea77d6e484 name=/runtime.v1.RuntimeService/Version
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.175294557Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=afa7b699-c0e0-4374-9e11-756ebf4c1dc1 name=/runtime.v1.ImageService/ImageFsInfo
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.177535971Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761986107177505680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589266,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=afa7b699-c0e0-4374-9e11-756ebf4c1dc1 name=/runtime.v1.ImageService/ImageFsInfo
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.178170517Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6fb2b345-b7aa-4fca-abcb-8f876c0cb862 name=/runtime.v1.RuntimeService/ListContainers
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.178239876Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6fb2b345-b7aa-4fca-abcb-8f876c0cb862 name=/runtime.v1.RuntimeService/ListContainers
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.178607432Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e65569ad242eed2f42be192467aae915357a9793d26ed5a6e7945d301ba01a3f,PodSandboxId:b923e49603933e8a8bf8cde5cb22d75aa00ed15505044bdc2f3722730bc9692a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1761985965160235434,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02be2896-2e22-4268-9b74-1264e195dc37,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfe5ebc20e7d29df338579c7b935940f08efecb6543073275401ad73613c0441,PodSandboxId:e3bf65b1e951bd50ff236359e95effbce0685e2362bcf57334b104bc448dce0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761985927233968486,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 41aabf94-d190-48f2-ba3e-eab75a7075ad,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7efcdb0a07d6dd76b1c2d4864c48fc4d018b3f1fcf2047055101fb357ab5402,PodSandboxId:fbebd578d37fc65979c21d716efb62aaa0bdc5700000ae97a8fd119f04966082,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761985884810001741,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-8fm8x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ad6e2792-c8ab-4c5a-8932-7b144019c8b1,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5966eef3a473b1f609b7d8222b6f5cb744341011c83cf4de2c23e29dd53513f8,PodSandboxId:038f3ad417f7d8ea11852bbe5169dde89b7f55fc98d1efc2cd878a2fa5f77fa2,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761985863274782285,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-x52f8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52c90c76-9a17-481e-8bea-e4766c94af1d,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c723ffbb30e65a5e0a493cfdaaa9d4424e77ffc8ed9a9423c1fd00685b6eb142,PodSandboxId:aa7709f3fd4dd2b51b301f10a437f9fe28dfbf24957afc239b7f8ef9683a17ac,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761985863155849556,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jxdt4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2dbe5a05-8c54-4c06-bf27-0e68d39c6fc8,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0042304deba7bc2052b30a988a7b431d59594251701f19184b8f62d56f8ca692,PodSandboxId:02d520347d1562581e7699092ed7a1defec8d104a7505c32444e7dd40c4c8fef,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761985858850992899,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-gv7nr,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c1d68823-6547-42f4-8cfa-83aa02d048e0,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df886be9fafbf70a8d3cd565b72e63e9accaafcf77ccb67574da6ae4ccadbc36,PodSandboxId:2ce0d8a6fdd1004a3d61e28e44984e721ec23a2cfe2ef24da55bbe08fbad7e0c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761985838257325571,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36080b1f-6e52-4871-bf53-646c532b90bb,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e68896ea450e3830e7816ff23f703d6390da464016ebb409a2b0cd736e24cc3,PodSandboxId:e1a7671351bd03a5496e4f474cdc3f1f7931721e30a7857
380bfb8893edc4253,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761985814070432986,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wx8s2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81d7a980-35fc-40ae-a47f-4be99c0b6c65,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85b250df1323cf03f8796cb8581666b4a6c3e33180cbaa4d3112d86c9d3da69d,PodSandboxId:d6bdbbd
a4b8eebea54a6d4b11612cc3421e4fd0b33ad5049766f420887daae51,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761985812425666423,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0ce500-deaa-4b2b-9613-8479f762e6b8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0410edc0c61ce504b45bbb3f18b0f22536d22712f7391c14e439c94e84304edb,PodSandboxId:3190eac37d4b6a8aa43
e4721cd12ebe3adcbe31e9e8d80fa895d9682468d2506,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761985804664891001,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sjgmx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66422fdc-0c8f-4909-b971-478ee3ec6443,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86baff1e8fbedaeb7a8e43e0536bd9a60d4f37c3aa20f5c70c599ad6b4d6bc3c,PodSandboxId:848ef4cde81cd7d34f8bbf866ab7a5b6b153a6bd60067090256c44b39c2c1667,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761985803690243517,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 476d893f-eeca-41a3-aa64-4f3340875cdf,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b52dfbf87aab4d25a48bfefa8ee19291d8e4de1116564600b891281e618d92,PodSandboxId:a0e49d4d3f46015d59fa7e02999142e82cb51c4839386803064e9b872786d6eb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761985792356872335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-468489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934186350482a3c9b581189c456f4b92,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"ho
stPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff69c00f6f21b264dd70ef4d32dceab07331462a53d1976d7851c9187893a8b8,PodSandboxId:43e34f9c901bf02badeea564568695021a6623629ca7ed1c4bf81e9643b167ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761985792351464371,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-468489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99b2c9fa1b7864b1a6ebbc1ce609e0c,},Annotations:map[string]str
ing{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b86f33110d8c4f52aa6c4337dccf801e3577bfad86423097372aa5d5887c14b,PodSandboxId:27bf80994e081a92e83e82c505d6a683c79d1a7ff80e3015d11ae5e017278ac0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761985792321663910,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons
-468489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67b76248b00540f35ccebf20c3a3df87,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee77fcda0b4b15a8180e149cb83f9d4052c608c26842ed0e17df21e75e99285a,PodSandboxId:a0ba435c55f8d005babe77ecce58d2ef5237e8f0c69d8b47e160fd982ae90c17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761985792312766109,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-468489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89418dba440d5b9db768df6f8152cfb8,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6fb2b345-b7aa-4fca-abcb-8f876c0cb862 name=/runtime.v1.RuntimeService/ListContainers
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.218229482Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29ee6f96-c333-4edc-8a93-0d5210334203 name=/runtime.v1.RuntimeService/Version
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.218301762Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29ee6f96-c333-4edc-8a93-0d5210334203 name=/runtime.v1.RuntimeService/Version
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.220181907Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fff96c8c-c9b4-4cb3-b2c9-ef6ab1920b70 name=/runtime.v1.ImageService/ImageFsInfo
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.221601728Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761986107221571443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589266,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fff96c8c-c9b4-4cb3-b2c9-ef6ab1920b70 name=/runtime.v1.ImageService/ImageFsInfo
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.222325284Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=13ef7b32-4904-4359-a158-2cbabf452340 name=/runtime.v1.RuntimeService/ListContainers
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.222437316Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=13ef7b32-4904-4359-a158-2cbabf452340 name=/runtime.v1.RuntimeService/ListContainers
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.223240537Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e65569ad242eed2f42be192467aae915357a9793d26ed5a6e7945d301ba01a3f,PodSandboxId:b923e49603933e8a8bf8cde5cb22d75aa00ed15505044bdc2f3722730bc9692a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1761985965160235434,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02be2896-2e22-4268-9b74-1264e195dc37,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfe5ebc20e7d29df338579c7b935940f08efecb6543073275401ad73613c0441,PodSandboxId:e3bf65b1e951bd50ff236359e95effbce0685e2362bcf57334b104bc448dce0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761985927233968486,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 41aabf94-d190-48f2-ba3e-eab75a7075ad,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7efcdb0a07d6dd76b1c2d4864c48fc4d018b3f1fcf2047055101fb357ab5402,PodSandboxId:fbebd578d37fc65979c21d716efb62aaa0bdc5700000ae97a8fd119f04966082,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761985884810001741,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-8fm8x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ad6e2792-c8ab-4c5a-8932-7b144019c8b1,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5966eef3a473b1f609b7d8222b6f5cb744341011c83cf4de2c23e29dd53513f8,PodSandboxId:038f3ad417f7d8ea11852bbe5169dde89b7f55fc98d1efc2cd878a2fa5f77fa2,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761985863274782285,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-x52f8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52c90c76-9a17-481e-8bea-e4766c94af1d,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c723ffbb30e65a5e0a493cfdaaa9d4424e77ffc8ed9a9423c1fd00685b6eb142,PodSandboxId:aa7709f3fd4dd2b51b301f10a437f9fe28dfbf24957afc239b7f8ef9683a17ac,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761985863155849556,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jxdt4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2dbe5a05-8c54-4c06-bf27-0e68d39c6fc8,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0042304deba7bc2052b30a988a7b431d59594251701f19184b8f62d56f8ca692,PodSandboxId:02d520347d1562581e7699092ed7a1defec8d104a7505c32444e7dd40c4c8fef,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761985858850992899,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-gv7nr,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c1d68823-6547-42f4-8cfa-83aa02d048e0,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df886be9fafbf70a8d3cd565b72e63e9accaafcf77ccb67574da6ae4ccadbc36,PodSandboxId:2ce0d8a6fdd1004a3d61e28e44984e721ec23a2cfe2ef24da55bbe08fbad7e0c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761985838257325571,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36080b1f-6e52-4871-bf53-646c532b90bb,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e68896ea450e3830e7816ff23f703d6390da464016ebb409a2b0cd736e24cc3,PodSandboxId:e1a7671351bd03a5496e4f474cdc3f1f7931721e30a7857
380bfb8893edc4253,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761985814070432986,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wx8s2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81d7a980-35fc-40ae-a47f-4be99c0b6c65,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85b250df1323cf03f8796cb8581666b4a6c3e33180cbaa4d3112d86c9d3da69d,PodSandboxId:d6bdbbd
a4b8eebea54a6d4b11612cc3421e4fd0b33ad5049766f420887daae51,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761985812425666423,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0ce500-deaa-4b2b-9613-8479f762e6b8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0410edc0c61ce504b45bbb3f18b0f22536d22712f7391c14e439c94e84304edb,PodSandboxId:3190eac37d4b6a8aa43
e4721cd12ebe3adcbe31e9e8d80fa895d9682468d2506,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761985804664891001,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sjgmx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66422fdc-0c8f-4909-b971-478ee3ec6443,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86baff1e8fbedaeb7a8e43e0536bd9a60d4f37c3aa20f5c70c599ad6b4d6bc3c,PodSandboxId:848ef4cde81cd7d34f8bbf866ab7a5b6b153a6bd60067090256c44b39c2c1667,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761985803690243517,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 476d893f-eeca-41a3-aa64-4f3340875cdf,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b52dfbf87aab4d25a48bfefa8ee19291d8e4de1116564600b891281e618d92,PodSandboxId:a0e49d4d3f46015d59fa7e02999142e82cb51c4839386803064e9b872786d6eb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761985792356872335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-468489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934186350482a3c9b581189c456f4b92,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"ho
stPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff69c00f6f21b264dd70ef4d32dceab07331462a53d1976d7851c9187893a8b8,PodSandboxId:43e34f9c901bf02badeea564568695021a6623629ca7ed1c4bf81e9643b167ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761985792351464371,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-468489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99b2c9fa1b7864b1a6ebbc1ce609e0c,},Annotations:map[string]str
ing{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b86f33110d8c4f52aa6c4337dccf801e3577bfad86423097372aa5d5887c14b,PodSandboxId:27bf80994e081a92e83e82c505d6a683c79d1a7ff80e3015d11ae5e017278ac0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761985792321663910,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons
-468489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67b76248b00540f35ccebf20c3a3df87,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee77fcda0b4b15a8180e149cb83f9d4052c608c26842ed0e17df21e75e99285a,PodSandboxId:a0ba435c55f8d005babe77ecce58d2ef5237e8f0c69d8b47e160fd982ae90c17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761985792312766109,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-468489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89418dba440d5b9db768df6f8152cfb8,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=13ef7b32-4904-4359-a158-2cbabf452340 name=/runtime.v1.RuntimeService/ListContainers
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.264462062Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0f93b806-4836-4298-8197-c1d7d3afe6b0 name=/runtime.v1.RuntimeService/Version
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.264551985Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0f93b806-4836-4298-8197-c1d7d3afe6b0 name=/runtime.v1.RuntimeService/Version
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.266098864Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74313532-8559-4dc4-86e8-4f96f99fffff name=/runtime.v1.ImageService/ImageFsInfo
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.267478232Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761986107267448623,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589266,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74313532-8559-4dc4-86e8-4f96f99fffff name=/runtime.v1.ImageService/ImageFsInfo
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.268249770Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=657d974b-51e7-470b-95b1-6d2482433979 name=/runtime.v1.RuntimeService/ListContainers
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.268310012Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=657d974b-51e7-470b-95b1-6d2482433979 name=/runtime.v1.RuntimeService/ListContainers
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.269141565Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e65569ad242eed2f42be192467aae915357a9793d26ed5a6e7945d301ba01a3f,PodSandboxId:b923e49603933e8a8bf8cde5cb22d75aa00ed15505044bdc2f3722730bc9692a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1761985965160235434,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02be2896-2e22-4268-9b74-1264e195dc37,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfe5ebc20e7d29df338579c7b935940f08efecb6543073275401ad73613c0441,PodSandboxId:e3bf65b1e951bd50ff236359e95effbce0685e2362bcf57334b104bc448dce0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761985927233968486,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 41aabf94-d190-48f2-ba3e-eab75a7075ad,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7efcdb0a07d6dd76b1c2d4864c48fc4d018b3f1fcf2047055101fb357ab5402,PodSandboxId:fbebd578d37fc65979c21d716efb62aaa0bdc5700000ae97a8fd119f04966082,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761985884810001741,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-8fm8x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ad6e2792-c8ab-4c5a-8932-7b144019c8b1,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5966eef3a473b1f609b7d8222b6f5cb744341011c83cf4de2c23e29dd53513f8,PodSandboxId:038f3ad417f7d8ea11852bbe5169dde89b7f55fc98d1efc2cd878a2fa5f77fa2,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761985863274782285,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-x52f8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52c90c76-9a17-481e-8bea-e4766c94af1d,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c723ffbb30e65a5e0a493cfdaaa9d4424e77ffc8ed9a9423c1fd00685b6eb142,PodSandboxId:aa7709f3fd4dd2b51b301f10a437f9fe28dfbf24957afc239b7f8ef9683a17ac,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761985863155849556,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jxdt4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2dbe5a05-8c54-4c06-bf27-0e68d39c6fc8,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0042304deba7bc2052b30a988a7b431d59594251701f19184b8f62d56f8ca692,PodSandboxId:02d520347d1562581e7699092ed7a1defec8d104a7505c32444e7dd40c4c8fef,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761985858850992899,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-gv7nr,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c1d68823-6547-42f4-8cfa-83aa02d048e0,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df886be9fafbf70a8d3cd565b72e63e9accaafcf77ccb67574da6ae4ccadbc36,PodSandboxId:2ce0d8a6fdd1004a3d61e28e44984e721ec23a2cfe2ef24da55bbe08fbad7e0c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761985838257325571,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36080b1f-6e52-4871-bf53-646c532b90bb,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e68896ea450e3830e7816ff23f703d6390da464016ebb409a2b0cd736e24cc3,PodSandboxId:e1a7671351bd03a5496e4f474cdc3f1f7931721e30a7857
380bfb8893edc4253,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761985814070432986,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wx8s2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81d7a980-35fc-40ae-a47f-4be99c0b6c65,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85b250df1323cf03f8796cb8581666b4a6c3e33180cbaa4d3112d86c9d3da69d,PodSandboxId:d6bdbbd
a4b8eebea54a6d4b11612cc3421e4fd0b33ad5049766f420887daae51,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761985812425666423,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0ce500-deaa-4b2b-9613-8479f762e6b8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0410edc0c61ce504b45bbb3f18b0f22536d22712f7391c14e439c94e84304edb,PodSandboxId:3190eac37d4b6a8aa43
e4721cd12ebe3adcbe31e9e8d80fa895d9682468d2506,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761985804664891001,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sjgmx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66422fdc-0c8f-4909-b971-478ee3ec6443,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86baff1e8fbedaeb7a8e43e0536bd9a60d4f37c3aa20f5c70c599ad6b4d6bc3c,PodSandboxId:848ef4cde81cd7d34f8bbf866ab7a5b6b153a6bd60067090256c44b39c2c1667,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761985803690243517,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 476d893f-eeca-41a3-aa64-4f3340875cdf,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b52dfbf87aab4d25a48bfefa8ee19291d8e4de1116564600b891281e618d92,PodSandboxId:a0e49d4d3f46015d59fa7e02999142e82cb51c4839386803064e9b872786d6eb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761985792356872335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-468489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934186350482a3c9b581189c456f4b92,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff69c00f6f21b264dd70ef4d32dceab07331462a53d1976d7851c9187893a8b8,PodSandboxId:43e34f9c901bf02badeea564568695021a6623629ca7ed1c4bf81e9643b167ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761985792351464371,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-468489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99b2c9fa1b7864b1a6ebbc1ce609e0c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b86f33110d8c4f52aa6c4337dccf801e3577bfad86423097372aa5d5887c14b,PodSandboxId:27bf80994e081a92e83e82c505d6a683c79d1a7ff80e3015d11ae5e017278ac0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761985792321663910,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-468489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67b76248b00540f35ccebf20c3a3df87,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee77fcda0b4b15a8180e149cb83f9d4052c608c26842ed0e17df21e75e99285a,PodSandboxId:a0ba435c55f8d005babe77ecce58d2ef5237e8f0c69d8b47e160fd982ae90c17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761985792312766109,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-468489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89418dba440d5b9db768df6f8152cfb8,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=657d974b-51e7-470b-95b1-6d2482433979 name=/runtime.v1.RuntimeService/ListContainers
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.284610907Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.284905226Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
e65569ad242ee docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 2 minutes ago Running nginx 0 b923e49603933 nginx
bfe5ebc20e7d2 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 3 minutes ago Running busybox 0 e3bf65b1e951b busybox
b7efcdb0a07d6 registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd 3 minutes ago Running controller 0 fbebd578d37fc ingress-nginx-controller-675c5ddd98-8fm8x
5966eef3a473b registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39 4 minutes ago Exited patch 0 038f3ad417f7d ingress-nginx-admission-patch-x52f8
c723ffbb30e65 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39 4 minutes ago Exited create 0 aa7709f3fd4dd ingress-nginx-admission-create-jxdt4
0042304deba7b ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb 4 minutes ago Running gadget 0 02d520347d156 gadget-gv7nr
df886be9fafbf docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 4 minutes ago Running minikube-ingress-dns 0 2ce0d8a6fdd10 kube-ingress-dns-minikube
8e68896ea450e docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 4 minutes ago Running amd-gpu-device-plugin 0 e1a7671351bd0 amd-gpu-device-plugin-wx8s2
85b250df1323c 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 d6bdbbda4b8ee storage-provisioner
0410edc0c61ce 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969 5 minutes ago Running coredns 0 3190eac37d4b6 coredns-66bc5c9577-sjgmx
86baff1e8fbed fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7 5 minutes ago Running kube-proxy 0 848ef4cde81cd kube-proxy-d6zrs
29b52dfbf87aa 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115 5 minutes ago Running etcd 0 a0e49d4d3f460 etcd-addons-468489
ff69c00f6f21b 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813 5 minutes ago Running kube-scheduler 0 43e34f9c901bf kube-scheduler-addons-468489
0b86f33110d8c c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f 5 minutes ago Running kube-controller-manager 0 27bf80994e081 kube-controller-manager-addons-468489
ee77fcda0b4b1 c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97 5 minutes ago Running kube-apiserver 0 a0ba435c55f8d kube-apiserver-addons-468489
==> coredns [0410edc0c61ce504b45bbb3f18b0f22536d22712f7391c14e439c94e84304edb] <==
[INFO] 10.244.0.8:53217 - 52968 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.001296341s
[INFO] 10.244.0.8:53217 - 30963 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000150071s
[INFO] 10.244.0.8:53217 - 62719 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000212243s
[INFO] 10.244.0.8:53217 - 32338 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000414556s
[INFO] 10.244.0.8:53217 - 24575 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000173934s
[INFO] 10.244.0.8:53217 - 43313 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00010189s
[INFO] 10.244.0.8:53217 - 31640 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000208794s
[INFO] 10.244.0.8:51859 - 30571 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00021214s
[INFO] 10.244.0.8:51859 - 30236 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000274982s
[INFO] 10.244.0.8:54840 - 56669 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000123094s
[INFO] 10.244.0.8:54840 - 56427 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085391s
[INFO] 10.244.0.8:35736 - 19956 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000104725s
[INFO] 10.244.0.8:35736 - 19463 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000147811s
[INFO] 10.244.0.8:38204 - 37170 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000118981s
[INFO] 10.244.0.8:38204 - 36992 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000160432s
[INFO] 10.244.0.23:50070 - 19806 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00087431s
[INFO] 10.244.0.23:58901 - 33904 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000216667s
[INFO] 10.244.0.23:57004 - 57995 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000111606s
[INFO] 10.244.0.23:48375 - 37878 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000120679s
[INFO] 10.244.0.23:49540 - 50340 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000139635s
[INFO] 10.244.0.23:39694 - 2307 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088413s
[INFO] 10.244.0.23:52831 - 64898 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001332897s
[INFO] 10.244.0.23:37033 - 25300 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001203329s
[INFO] 10.244.0.26:43125 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000392449s
[INFO] 10.244.0.26:40324 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000157102s
==> describe nodes <==
Name: addons-468489
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-468489
kubernetes.io/os=linux
minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
minikube.k8s.io/name=addons-468489
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_11_01T08_29_58_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-468489
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 01 Nov 2025 08:29:55 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-468489
AcquireTime: <unset>
RenewTime: Sat, 01 Nov 2025 08:35:04 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 01 Nov 2025 08:33:01 +0000 Sat, 01 Nov 2025 08:29:52 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 01 Nov 2025 08:33:01 +0000 Sat, 01 Nov 2025 08:29:52 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 01 Nov 2025 08:33:01 +0000 Sat, 01 Nov 2025 08:29:52 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 01 Nov 2025 08:33:01 +0000 Sat, 01 Nov 2025 08:29:58 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.108
Hostname: addons-468489
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001784Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001784Ki
pods: 110
System Info:
Machine ID: 839602306f48496481c1c1246eb542bd
System UUID: 83960230-6f48-4964-81c1-c1246eb542bd
Boot ID: 80856773-1675-4201-abf1-d791538d2349
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.34.1
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (14 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m4s
default hello-world-app-5d498dc89-5x257 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m27s
gadget gadget-gv7nr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m57s
ingress-nginx ingress-nginx-controller-675c5ddd98-8fm8x 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m56s
kube-system amd-gpu-device-plugin-wx8s2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m1s
kube-system coredns-66bc5c9577-sjgmx 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 5m4s
kube-system etcd-addons-468489 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 5m11s
kube-system kube-apiserver-addons-468489 250m (12%) 0 (0%) 0 (0%) 0 (0%) 5m10s
kube-system kube-controller-manager-addons-468489 200m (10%) 0 (0%) 0 (0%) 0 (0%) 5m10s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m59s
kube-system kube-proxy-d6zrs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m4s
kube-system kube-scheduler-addons-468489 100m (5%) 0 (0%) 0 (0%) 0 (0%) 5m10s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m58s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 5m2s kube-proxy
Normal NodeHasSufficientMemory 5m16s (x8 over 5m16s) kubelet Node addons-468489 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m16s (x8 over 5m16s) kubelet Node addons-468489 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m16s (x7 over 5m16s) kubelet Node addons-468489 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 5m16s kubelet Updated Node Allocatable limit across pods
Normal Starting 5m10s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 5m10s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 5m10s kubelet Node addons-468489 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m10s kubelet Node addons-468489 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m10s kubelet Node addons-468489 status is now: NodeHasSufficientPID
Normal NodeReady 5m9s kubelet Node addons-468489 status is now: NodeReady
Normal RegisteredNode 5m5s node-controller Node addons-468489 event: Registered Node addons-468489 in Controller
==> dmesg <==
[ +4.167563] kauditd_printk_skb: 371 callbacks suppressed
[ +6.347083] kauditd_printk_skb: 5 callbacks suppressed
[ +10.002164] kauditd_printk_skb: 11 callbacks suppressed
[ +5.133693] kauditd_printk_skb: 32 callbacks suppressed
[ +10.261342] kauditd_printk_skb: 32 callbacks suppressed
[ +5.193488] kauditd_printk_skb: 11 callbacks suppressed
[Nov 1 08:31] kauditd_printk_skb: 131 callbacks suppressed
[ +4.902464] kauditd_printk_skb: 111 callbacks suppressed
[ +3.423825] kauditd_printk_skb: 105 callbacks suppressed
[ +0.195096] kauditd_printk_skb: 74 callbacks suppressed
[ +4.560016] kauditd_printk_skb: 32 callbacks suppressed
[ +8.087978] kauditd_printk_skb: 17 callbacks suppressed
[Nov 1 08:32] kauditd_printk_skb: 2 callbacks suppressed
[ +13.017590] kauditd_printk_skb: 41 callbacks suppressed
[ +6.067331] kauditd_printk_skb: 22 callbacks suppressed
[ +5.375927] kauditd_printk_skb: 38 callbacks suppressed
[ +2.179313] kauditd_printk_skb: 105 callbacks suppressed
[ +0.000545] kauditd_printk_skb: 179 callbacks suppressed
[ +3.908041] kauditd_printk_skb: 113 callbacks suppressed
[ +2.395653] kauditd_printk_skb: 112 callbacks suppressed
[Nov 1 08:33] kauditd_printk_skb: 57 callbacks suppressed
[ +0.000024] kauditd_printk_skb: 10 callbacks suppressed
[ +5.084125] kauditd_printk_skb: 41 callbacks suppressed
[ +0.514037] kauditd_printk_skb: 130 callbacks suppressed
[Nov 1 08:35] kauditd_printk_skb: 7 callbacks suppressed
==> etcd [29b52dfbf87aab4d25a48bfefa8ee19291d8e4de1116564600b891281e618d92] <==
{"level":"warn","ts":"2025-11-01T08:31:19.699735Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.565137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-01T08:31:19.699771Z","caller":"traceutil/trace.go:172","msg":"trace[503839985] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1166; }","duration":"115.593687ms","start":"2025-11-01T08:31:19.584155Z","end":"2025-11-01T08:31:19.699749Z","steps":["trace[503839985] 'agreement among raft nodes before linearized reading' (duration: 115.539752ms)"],"step_count":1}
{"level":"info","ts":"2025-11-01T08:31:22.994799Z","caller":"traceutil/trace.go:172","msg":"trace[345948826] transaction","detail":"{read_only:false; response_revision:1172; number_of_response:1; }","duration":"240.685188ms","start":"2025-11-01T08:31:22.754101Z","end":"2025-11-01T08:31:22.994786Z","steps":["trace[345948826] 'process raft request' (duration: 240.567422ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-01T08:31:24.043194Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.583924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-01T08:31:24.043245Z","caller":"traceutil/trace.go:172","msg":"trace[648144560] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1173; }","duration":"132.654828ms","start":"2025-11-01T08:31:23.910580Z","end":"2025-11-01T08:31:24.043235Z","steps":["trace[648144560] 'range keys from in-memory index tree' (duration: 132.488062ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-01T08:31:24.043933Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.829538ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-01T08:31:24.044030Z","caller":"traceutil/trace.go:172","msg":"trace[1821756979] range","detail":"{range_begin:/registry/flowschemas; range_end:; response_count:0; response_revision:1173; }","duration":"108.935277ms","start":"2025-11-01T08:31:23.935086Z","end":"2025-11-01T08:31:24.044022Z","steps":["trace[1821756979] 'range keys from in-memory index tree' (duration: 108.373344ms)"],"step_count":1}
{"level":"info","ts":"2025-11-01T08:31:59.494398Z","caller":"traceutil/trace.go:172","msg":"trace[1297873927] linearizableReadLoop","detail":"{readStateIndex:1312; appliedIndex:1312; }","duration":"254.981682ms","start":"2025-11-01T08:31:59.239394Z","end":"2025-11-01T08:31:59.494376Z","steps":["trace[1297873927] 'read index received' (duration: 254.942901ms)","trace[1297873927] 'applied index is now lower than readState.Index' (duration: 37.811µs)"],"step_count":2}
{"level":"warn","ts":"2025-11-01T08:31:59.494561Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"255.182709ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-01T08:31:59.494593Z","caller":"traceutil/trace.go:172","msg":"trace[1033651856] range","detail":"{range_begin:/registry/prioritylevelconfigurations; range_end:; response_count:0; response_revision:1268; }","duration":"255.249976ms","start":"2025-11-01T08:31:59.239335Z","end":"2025-11-01T08:31:59.494585Z","steps":["trace[1033651856] 'agreement among raft nodes before linearized reading' (duration: 255.154647ms)"],"step_count":1}
{"level":"info","ts":"2025-11-01T08:31:59.494582Z","caller":"traceutil/trace.go:172","msg":"trace[1134202419] transaction","detail":"{read_only:false; response_revision:1269; number_of_response:1; }","duration":"285.46888ms","start":"2025-11-01T08:31:59.209101Z","end":"2025-11-01T08:31:59.494570Z","steps":["trace[1134202419] 'process raft request' (duration: 285.32125ms)"],"step_count":1}
{"level":"info","ts":"2025-11-01T08:32:29.891161Z","caller":"traceutil/trace.go:172","msg":"trace[427803549] transaction","detail":"{read_only:false; response_revision:1435; number_of_response:1; }","duration":"169.610725ms","start":"2025-11-01T08:32:29.721506Z","end":"2025-11-01T08:32:29.891117Z","steps":["trace[427803549] 'process raft request' (duration: 169.479164ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-01T08:32:30.138883Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"166.897792ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-01T08:32:30.138948Z","caller":"traceutil/trace.go:172","msg":"trace[206656571] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1435; }","duration":"166.966997ms","start":"2025-11-01T08:32:29.971970Z","end":"2025-11-01T08:32:30.138937Z","steps":["trace[206656571] 'range keys from in-memory index tree' (duration: 166.814919ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-01T08:32:30.139143Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.7919ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:498"}
{"level":"info","ts":"2025-11-01T08:32:30.139165Z","caller":"traceutil/trace.go:172","msg":"trace[668058850] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1435; }","duration":"103.818543ms","start":"2025-11-01T08:32:30.035340Z","end":"2025-11-01T08:32:30.139159Z","steps":["trace[668058850] 'range keys from in-memory index tree' (duration: 103.578379ms)"],"step_count":1}
{"level":"info","ts":"2025-11-01T08:33:14.307184Z","caller":"traceutil/trace.go:172","msg":"trace[1945716621] linearizableReadLoop","detail":"{readStateIndex:1816; appliedIndex:1816; }","duration":"291.360612ms","start":"2025-11-01T08:33:14.015763Z","end":"2025-11-01T08:33:14.307124Z","steps":["trace[1945716621] 'read index received' (duration: 291.349008ms)","trace[1945716621] 'applied index is now lower than readState.Index' (duration: 6.911µs)"],"step_count":2}
{"level":"warn","ts":"2025-11-01T08:33:14.307513Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"291.713681ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" limit:1 ","response":"range_response_count:1 size:982"}
{"level":"info","ts":"2025-11-01T08:33:14.307540Z","caller":"traceutil/trace.go:172","msg":"trace[480338789] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc-restore; range_end:; response_count:1; response_revision:1744; }","duration":"291.773714ms","start":"2025-11-01T08:33:14.015759Z","end":"2025-11-01T08:33:14.307533Z","steps":["trace[480338789] 'agreement among raft nodes before linearized reading' (duration: 291.568212ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-01T08:33:14.308220Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"152.438339ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots/default/new-snapshot-demo\" limit:1 ","response":"range_response_count:1 size:1698"}
{"level":"info","ts":"2025-11-01T08:33:14.308272Z","caller":"traceutil/trace.go:172","msg":"trace[100684495] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots/default/new-snapshot-demo; range_end:; response_count:1; response_revision:1745; }","duration":"152.499891ms","start":"2025-11-01T08:33:14.155763Z","end":"2025-11-01T08:33:14.308263Z","steps":["trace[100684495] 'agreement among raft nodes before linearized reading' (duration: 152.384522ms)"],"step_count":1}
{"level":"info","ts":"2025-11-01T08:33:14.308729Z","caller":"traceutil/trace.go:172","msg":"trace[1502752599] transaction","detail":"{read_only:false; response_revision:1745; number_of_response:1; }","duration":"350.142852ms","start":"2025-11-01T08:33:13.958572Z","end":"2025-11-01T08:33:14.308715Z","steps":["trace[1502752599] 'process raft request' (duration: 348.630084ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-01T08:33:14.309342Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T08:33:13.958552Z","time spent":"350.286255ms","remote":"127.0.0.1:50332","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1737 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
{"level":"warn","ts":"2025-11-01T08:33:14.310683Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.169282ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
{"level":"info","ts":"2025-11-01T08:33:14.310840Z","caller":"traceutil/trace.go:172","msg":"trace[927265233] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1745; }","duration":"140.046605ms","start":"2025-11-01T08:33:14.170740Z","end":"2025-11-01T08:33:14.310786Z","steps":["trace[927265233] 'agreement among raft nodes before linearized reading' (duration: 138.730788ms)"],"step_count":1}
==> kernel <==
08:35:07 up 5 min, 0 users, load average: 0.54, 1.19, 0.64
Linux addons-468489 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [ee77fcda0b4b15a8180e149cb83f9d4052c608c26842ed0e17df21e75e99285a] <==
E1101 08:30:44.904270 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.231.114:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.231.114:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.231.114:443: connect: connection refused" logger="UnhandledError"
E1101 08:30:44.908432 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.231.114:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.231.114:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.231.114:443: connect: connection refused" logger="UnhandledError"
I1101 08:30:44.983978 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E1101 08:32:13.481602 1 conn.go:339] Error on socket receive: read tcp 192.168.39.108:8443->192.168.39.1:49294: use of closed network connection
E1101 08:32:13.675594 1 conn.go:339] Error on socket receive: read tcp 192.168.39.108:8443->192.168.39.1:49334: use of closed network connection
I1101 08:32:22.879757 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.181.126"}
I1101 08:32:40.508929 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1101 08:32:40.702266 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.132.59"}
I1101 08:32:45.923404 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1101 08:33:02.265787 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
E1101 08:33:06.689523 1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
I1101 08:33:24.612706 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1101 08:33:24.612785 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1101 08:33:24.663052 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1101 08:33:24.663116 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1101 08:33:24.668030 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1101 08:33:24.668087 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1101 08:33:24.732712 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1101 08:33:24.732819 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1101 08:33:24.817186 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1101 08:33:24.817229 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1101 08:33:25.668384 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1101 08:33:25.818268 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W1101 08:33:25.846076 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I1101 08:35:06.036671 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.54.88"}
==> kube-controller-manager [0b86f33110d8c4f52aa6c4337dccf801e3577bfad86423097372aa5d5887c14b] <==
E1101 08:33:32.391509 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
I1101 08:33:33.355260 1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
I1101 08:33:33.355414 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E1101 08:33:33.505239 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 08:33:33.507051 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 08:33:35.293136 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 08:33:35.294135 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 08:33:40.024875 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 08:33:40.025933 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 08:33:45.601954 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 08:33:45.603253 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 08:33:46.784125 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 08:33:46.785602 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 08:33:57.853519 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 08:33:57.854523 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 08:34:05.950610 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 08:34:05.952184 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 08:34:06.199633 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 08:34:06.200594 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 08:34:32.625917 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 08:34:32.626988 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 08:34:33.915013 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 08:34:33.916495 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 08:34:49.513756 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 08:34:49.514881 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
==> kube-proxy [86baff1e8fbedaeb7a8e43e0536bd9a60d4f37c3aa20f5c70c599ad6b4d6bc3c] <==
I1101 08:30:04.379607 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1101 08:30:04.484250 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1101 08:30:04.489669 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.108"]
E1101 08:30:04.491046 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1101 08:30:04.711753 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1101 08:30:04.712094 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1101 08:30:04.712128 1 server_linux.go:132] "Using iptables Proxier"
I1101 08:30:04.749031 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1101 08:30:04.750676 1 server.go:527] "Version info" version="v1.34.1"
I1101 08:30:04.750780 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1101 08:30:04.789116 1 config.go:200] "Starting service config controller"
I1101 08:30:04.887480 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1101 08:30:04.798829 1 config.go:403] "Starting serviceCIDR config controller"
I1101 08:30:04.887506 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1101 08:30:04.809107 1 config.go:309] "Starting node config controller"
I1101 08:30:04.887513 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1101 08:30:04.887518 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1101 08:30:04.798814 1 config.go:106] "Starting endpoint slice config controller"
I1101 08:30:04.914825 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1101 08:30:04.914837 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1101 08:30:04.914927 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1101 08:30:04.950616 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
==> kube-scheduler [ff69c00f6f21b264dd70ef4d32dceab07331462a53d1976d7851c9187893a8b8] <==
E1101 08:29:55.155934 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1101 08:29:55.156453 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1101 08:29:55.156546 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1101 08:29:55.156639 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1101 08:29:55.156743 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1101 08:29:55.156934 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1101 08:29:55.157981 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1101 08:29:55.993062 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1101 08:29:55.993972 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1101 08:29:56.064386 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1101 08:29:56.089171 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1101 08:29:56.120020 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1101 08:29:56.150183 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1101 08:29:56.157399 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1101 08:29:56.196761 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1101 08:29:56.204599 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1101 08:29:56.250551 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1101 08:29:56.276897 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1101 08:29:56.286300 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1101 08:29:56.307935 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1101 08:29:56.313939 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1101 08:29:56.329734 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1101 08:29:56.372416 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1101 08:29:56.443907 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
I1101 08:29:59.137411 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Nov 01 08:33:27 addons-468489 kubelet[1503]: E1101 08:33:27.864747 1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761986007864261615 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:589266} inodes_used:{value:201}}"
Nov 01 08:33:27 addons-468489 kubelet[1503]: E1101 08:33:27.864768 1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761986007864261615 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:589266} inodes_used:{value:201}}"
Nov 01 08:33:37 addons-468489 kubelet[1503]: E1101 08:33:37.868468 1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761986017867919102 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:589266} inodes_used:{value:201}}"
Nov 01 08:33:37 addons-468489 kubelet[1503]: E1101 08:33:37.868491 1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761986017867919102 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:589266} inodes_used:{value:201}}"
Nov 01 08:33:47 addons-468489 kubelet[1503]: E1101 08:33:47.871611 1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761986027870696856 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:589266} inodes_used:{value:201}}"
Nov 01 08:33:47 addons-468489 kubelet[1503]: E1101 08:33:47.871721 1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761986027870696856 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:589266} inodes_used:{value:201}}"
Nov 01 08:33:50 addons-468489 kubelet[1503]: I1101 08:33:50.702165 1503 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-wx8s2" secret="" err="secret \"gcp-auth\" not found"
Nov 01 08:33:57 addons-468489 kubelet[1503]: E1101 08:33:57.875474 1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761986037875012928 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:589266} inodes_used:{value:201}}"
Nov 01 08:33:57 addons-468489 kubelet[1503]: E1101 08:33:57.875503 1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761986037875012928 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:589266} inodes_used:{value:201}}"
Nov 01 08:34:00 addons-468489 kubelet[1503]: I1101 08:34:00.825888 1503 scope.go:117] "RemoveContainer" containerID="c661ec10bb22123253e40ccaedcab1d71525f402c2aaa51013388b56677a457f"
Nov 01 08:34:07 addons-468489 kubelet[1503]: E1101 08:34:07.877889 1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761986047877611034 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:589266} inodes_used:{value:201}}"
Nov 01 08:34:07 addons-468489 kubelet[1503]: E1101 08:34:07.877927 1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761986047877611034 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:589266} inodes_used:{value:201}}"
Nov 01 08:34:17 addons-468489 kubelet[1503]: E1101 08:34:17.880433 1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761986057880012589 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:589266} inodes_used:{value:201}}"
Nov 01 08:34:17 addons-468489 kubelet[1503]: E1101 08:34:17.880716 1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761986057880012589 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:589266} inodes_used:{value:201}}"
Nov 01 08:34:25 addons-468489 kubelet[1503]: I1101 08:34:25.707458 1503 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Nov 01 08:34:27 addons-468489 kubelet[1503]: E1101 08:34:27.886750 1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761986067883729634 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:589266} inodes_used:{value:201}}"
Nov 01 08:34:27 addons-468489 kubelet[1503]: E1101 08:34:27.887040 1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761986067883729634 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:589266} inodes_used:{value:201}}"
Nov 01 08:34:37 addons-468489 kubelet[1503]: E1101 08:34:37.889714 1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761986077889303130 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:589266} inodes_used:{value:201}}"
Nov 01 08:34:37 addons-468489 kubelet[1503]: E1101 08:34:37.889741 1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761986077889303130 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:589266} inodes_used:{value:201}}"
Nov 01 08:34:47 addons-468489 kubelet[1503]: E1101 08:34:47.894155 1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761986087893064022 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:589266} inodes_used:{value:201}}"
Nov 01 08:34:47 addons-468489 kubelet[1503]: E1101 08:34:47.894231 1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761986087893064022 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:589266} inodes_used:{value:201}}"
Nov 01 08:34:57 addons-468489 kubelet[1503]: E1101 08:34:57.898114 1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761986097896509378 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:589266} inodes_used:{value:201}}"
Nov 01 08:34:57 addons-468489 kubelet[1503]: E1101 08:34:57.898168 1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761986097896509378 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:589266} inodes_used:{value:201}}"
Nov 01 08:34:58 addons-468489 kubelet[1503]: I1101 08:34:58.702933 1503 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-wx8s2" secret="" err="secret \"gcp-auth\" not found"
Nov 01 08:35:06 addons-468489 kubelet[1503]: I1101 08:35:06.107834 1503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sd28\" (UniqueName: \"kubernetes.io/projected/093032c3-57ab-46d6-9c77-d68ca1ac57fb-kube-api-access-7sd28\") pod \"hello-world-app-5d498dc89-5x257\" (UID: \"093032c3-57ab-46d6-9c77-d68ca1ac57fb\") " pod="default/hello-world-app-5d498dc89-5x257"
==> storage-provisioner [85b250df1323cf03f8796cb8581666b4a6c3e33180cbaa4d3112d86c9d3da69d] <==
W1101 08:34:42.820670 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 08:34:44.824071 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 08:34:44.828706 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 08:34:46.832184 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 08:34:46.840998 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 08:34:48.844054 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 08:34:48.849584 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 08:34:50.852819 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 08:34:50.862188 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 08:34:52.866817 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 08:34:52.871621 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 08:34:54.875011 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 08:34:54.883325 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 08:34:56.887527 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 08:34:56.892996 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 08:34:58.896146 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 08:34:58.902028 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 08:35:00.905131 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 08:35:00.910451 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 08:35:02.914223 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 08:35:02.919418 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 08:35:04.923605 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 08:35:04.930627 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 08:35:06.936297 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 08:35:06.945235 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-468489 -n addons-468489
helpers_test.go:269: (dbg) Run: kubectl --context addons-468489 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-5x257 ingress-nginx-admission-create-jxdt4 ingress-nginx-admission-patch-x52f8
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context addons-468489 describe pod hello-world-app-5d498dc89-5x257 ingress-nginx-admission-create-jxdt4 ingress-nginx-admission-patch-x52f8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-468489 describe pod hello-world-app-5d498dc89-5x257 ingress-nginx-admission-create-jxdt4 ingress-nginx-admission-patch-x52f8: exit status 1 (76.563955ms)
-- stdout --
Name: hello-world-app-5d498dc89-5x257
Namespace: default
Priority: 0
Service Account: default
Node: addons-468489/192.168.39.108
Start Time: Sat, 01 Nov 2025 08:35:05 +0000
Labels: app=hello-world-app
pod-template-hash=5d498dc89
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/hello-world-app-5d498dc89
Containers:
hello-world-app:
Container ID:
Image: docker.io/kicbase/echo-server:1.0
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7sd28 (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-7sd28:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3s default-scheduler Successfully assigned default/hello-world-app-5d498dc89-5x257 to addons-468489
Normal Pulling 2s kubelet Pulling image "docker.io/kicbase/echo-server:1.0"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-jxdt4" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-x52f8" not found
** /stderr **
helpers_test.go:287: kubectl --context addons-468489 describe pod hello-world-app-5d498dc89-5x257 ingress-nginx-admission-create-jxdt4 ingress-nginx-admission-patch-x52f8: exit status 1
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-468489 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-468489 addons disable ingress-dns --alsologtostderr -v=1: (1.097774064s)
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-468489 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-468489 addons disable ingress --alsologtostderr -v=1: (7.73839298s)
--- FAIL: TestAddons/parallel/Ingress (157.05s)