=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run: kubectl --context addons-086339 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run: kubectl --context addons-086339 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run: kubectl --context addons-086339 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [80f28ba1-b1ac-4f7a-9a35-3fd834d8e54e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-086339 -n addons-086339
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-11-01 10:01:11.614230965 +0000 UTC m=+686.356438952
addons_test.go:252: (dbg) Run: kubectl --context addons-086339 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-086339 describe po nginx -n default:
Name: nginx
Namespace: default
Priority: 0
Service Account: default
Node: addons-086339/192.168.39.58
Start Time: Sat, 01 Nov 2025 09:53:11 +0000
Labels: run=nginx
Annotations: <none>
Status: Pending
IP: 10.244.0.29
IPs:
IP: 10.244.0.29
Containers:
nginx:
Container ID:
Image: docker.io/nginx:alpine
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sggwf (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-sggwf:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8m default-scheduler Successfully assigned default/nginx to addons-086339
Warning Failed 5m4s kubelet Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning Failed 100s (x3 over 6m53s) kubelet Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning Failed 100s (x4 over 6m53s) kubelet Error: ErrImagePull
Normal BackOff 29s (x11 over 6m52s) kubelet Back-off pulling image "docker.io/nginx:alpine"
Warning Failed 29s (x11 over 6m52s) kubelet Error: ImagePullBackOff
Normal Pulling 15s (x5 over 8m) kubelet Pulling image "docker.io/nginx:alpine"
addons_test.go:252: (dbg) Run: kubectl --context addons-086339 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-086339 logs nginx -n default: exit status 1 (72.359798ms)
** stderr **
Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image
** /stderr **
addons_test.go:252: kubectl --context addons-086339 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
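Editor's note: the failure above is Docker Hub's anonymous pull rate limit (`toomanyrequests`), not an ingress regression. As a hedged aside (these are standard minikube commands, not part of this test run, and the mirror URL is illustrative), this flake can typically be avoided by pre-loading the image into the node or pointing the runtime at a registry mirror:

```shell
# Pre-pull the image on the host and load it into the minikube node,
# so the kubelet never contacts Docker Hub during the test.
minikube -p addons-086339 image load docker.io/nginx:alpine

# Or start the cluster with a registry mirror so docker.io pulls
# are served by the mirror instead of Docker Hub directly.
minikube start -p addons-086339 --registry-mirror=https://mirror.gcr.io
```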
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-086339 -n addons-086339
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p addons-086339 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-086339 logs -n 25: (1.320665297s)
helpers_test.go:260: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼──────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
│ delete │ -p download-only-036288 │ download-only-036288 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
│ delete │ -p download-only-319914 │ download-only-319914 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
│ delete │ -p download-only-036288 │ download-only-036288 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
│ start │ --download-only -p binary-mirror-623089 --alsologtostderr --binary-mirror http://127.0.0.1:33603 --driver=kvm2 --container-runtime=crio │ binary-mirror-623089 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ │
│ delete │ -p binary-mirror-623089 │ binary-mirror-623089 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
│ addons │ enable dashboard -p addons-086339 │ addons-086339 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ │
│ addons │ disable dashboard -p addons-086339 │ addons-086339 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ │
│ start │ -p addons-086339 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-086339 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:52 UTC │
│ addons │ addons-086339 addons disable volcano --alsologtostderr -v=1 │ addons-086339 │ jenkins │ v1.37.0 │ 01 Nov 25 09:52 UTC │ 01 Nov 25 09:52 UTC │
│ addons │ addons-086339 addons disable gcp-auth --alsologtostderr -v=1 │ addons-086339 │ jenkins │ v1.37.0 │ 01 Nov 25 09:52 UTC │ 01 Nov 25 09:52 UTC │
│ addons │ enable headlamp -p addons-086339 --alsologtostderr -v=1 │ addons-086339 │ jenkins │ v1.37.0 │ 01 Nov 25 09:52 UTC │ 01 Nov 25 09:52 UTC │
│ addons │ addons-086339 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-086339 │ jenkins │ v1.37.0 │ 01 Nov 25 09:52 UTC │ 01 Nov 25 09:52 UTC │
│ addons │ addons-086339 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-086339 │ jenkins │ v1.37.0 │ 01 Nov 25 09:52 UTC │ 01 Nov 25 09:53 UTC │
│ addons │ addons-086339 addons disable headlamp --alsologtostderr -v=1 │ addons-086339 │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
│ addons │ addons-086339 addons disable metrics-server --alsologtostderr -v=1 │ addons-086339 │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
│ ip │ addons-086339 ip │ addons-086339 │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
│ addons │ addons-086339 addons disable registry --alsologtostderr -v=1 │ addons-086339 │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
│ addons │ addons-086339 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-086339 │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-086339 │ addons-086339 │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
│ addons │ addons-086339 addons disable registry-creds --alsologtostderr -v=1 │ addons-086339 │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
│ addons │ addons-086339 addons disable yakd --alsologtostderr -v=1 │ addons-086339 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
│ addons │ addons-086339 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-086339 │ jenkins │ v1.37.0 │ 01 Nov 25 09:55 UTC │ 01 Nov 25 09:56 UTC │
│ addons │ addons-086339 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-086339 │ jenkins │ v1.37.0 │ 01 Nov 25 09:59 UTC │ 01 Nov 25 09:59 UTC │
│ addons │ addons-086339 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-086339 │ jenkins │ v1.37.0 │ 01 Nov 25 09:59 UTC │ 01 Nov 25 09:59 UTC │
└─────────┴──────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/11/01 09:49:57
Running on machine: ubuntu-20-agent-3
Binary: Built with gc go1.24.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1101 09:49:57.488461 74584 out.go:360] Setting OutFile to fd 1 ...
I1101 09:49:57.488721 74584 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:49:57.488731 74584 out.go:374] Setting ErrFile to fd 2...
I1101 09:49:57.488735 74584 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:49:57.488932 74584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
I1101 09:49:57.489456 74584 out.go:368] Setting JSON to false
I1101 09:49:57.490315 74584 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":5545,"bootTime":1761985052,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1101 09:49:57.490405 74584 start.go:143] virtualization: kvm guest
I1101 09:49:57.492349 74584 out.go:179] * [addons-086339] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1101 09:49:57.493732 74584 notify.go:221] Checking for updates...
I1101 09:49:57.493769 74584 out.go:179] - MINIKUBE_LOCATION=21830
I1101 09:49:57.495124 74584 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1101 09:49:57.496430 74584 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
I1101 09:49:57.497763 74584 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
I1101 09:49:57.499098 74584 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1101 09:49:57.500291 74584 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1101 09:49:57.501672 74584 driver.go:422] Setting default libvirt URI to qemu:///system
I1101 09:49:57.530798 74584 out.go:179] * Using the kvm2 driver based on user configuration
I1101 09:49:57.531916 74584 start.go:309] selected driver: kvm2
I1101 09:49:57.531929 74584 start.go:930] validating driver "kvm2" against <nil>
I1101 09:49:57.531940 74584 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1101 09:49:57.532704 74584 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1101 09:49:57.532950 74584 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1101 09:49:57.532995 74584 cni.go:84] Creating CNI manager for ""
I1101 09:49:57.533055 74584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1101 09:49:57.533066 74584 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1101 09:49:57.533123 74584 start.go:353] cluster config:
{Name:addons-086339 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-086339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1101 09:49:57.533236 74584 iso.go:125] acquiring lock: {Name:mk49d9a272bb99d336f82dfc5631a4c8ce9271c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1101 09:49:57.534643 74584 out.go:179] * Starting "addons-086339" primary control-plane node in "addons-086339" cluster
I1101 09:49:57.535623 74584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1101 09:49:57.535667 74584 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
I1101 09:49:57.535680 74584 cache.go:59] Caching tarball of preloaded images
I1101 09:49:57.535759 74584 preload.go:233] Found /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1101 09:49:57.535771 74584 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
I1101 09:49:57.536122 74584 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/config.json ...
I1101 09:49:57.536151 74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/config.json: {Name:mka52b297897069cd677da03eb710fe0f89e4afc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:49:57.536283 74584 start.go:360] acquireMachinesLock for addons-086339: {Name:mk53a05d125fe91ead2a39c6bbf2ba926c471e2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1101 09:49:57.536359 74584 start.go:364] duration metric: took 60.989µs to acquireMachinesLock for "addons-086339"
I1101 09:49:57.536383 74584 start.go:93] Provisioning new machine with config: &{Name:addons-086339 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-086339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1101 09:49:57.536443 74584 start.go:125] createHost starting for "" (driver="kvm2")
I1101 09:49:57.537962 74584 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1101 09:49:57.538116 74584 start.go:159] libmachine.API.Create for "addons-086339" (driver="kvm2")
I1101 09:49:57.538147 74584 client.go:173] LocalClient.Create starting
I1101 09:49:57.538241 74584 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem
I1101 09:49:57.899320 74584 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem
I1101 09:49:58.572079 74584 main.go:143] libmachine: creating domain...
I1101 09:49:58.572106 74584 main.go:143] libmachine: creating network...
I1101 09:49:58.573844 74584 main.go:143] libmachine: found existing default network
I1101 09:49:58.574184 74584 main.go:143] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1101 09:49:58.574920 74584 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c7bfb0}
I1101 09:49:58.575053 74584 main.go:143] libmachine: defining private network:
<network>
<name>mk-addons-086339</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1101 09:49:58.580872 74584 main.go:143] libmachine: creating private network mk-addons-086339 192.168.39.0/24...
I1101 09:49:58.651337 74584 main.go:143] libmachine: private network mk-addons-086339 192.168.39.0/24 created
I1101 09:49:58.651625 74584 main.go:143] libmachine: <network>
<name>mk-addons-086339</name>
<uuid>3e8e4cbf-1e3f-4b76-b08f-c763f9bae7dc</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:4f:55:bf'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1101 09:49:58.651651 74584 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339 ...
I1101 09:49:58.651674 74584 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21830-70113/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
I1101 09:49:58.651685 74584 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21830-70113/.minikube
I1101 09:49:58.651769 74584 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21830-70113/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21830-70113/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso...
I1101 09:49:58.889523 74584 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa...
I1101 09:49:59.320606 74584 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/addons-086339.rawdisk...
I1101 09:49:59.320670 74584 main.go:143] libmachine: Writing magic tar header
I1101 09:49:59.320695 74584 main.go:143] libmachine: Writing SSH key tar header
I1101 09:49:59.320769 74584 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339 ...
I1101 09:49:59.320832 74584 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339
I1101 09:49:59.320855 74584 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339 (perms=drwx------)
I1101 09:49:59.320865 74584 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-70113/.minikube/machines
I1101 09:49:59.320880 74584 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-70113/.minikube/machines (perms=drwxr-xr-x)
I1101 09:49:59.320892 74584 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-70113/.minikube
I1101 09:49:59.320902 74584 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-70113/.minikube (perms=drwxr-xr-x)
I1101 09:49:59.320910 74584 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-70113
I1101 09:49:59.320919 74584 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-70113 (perms=drwxrwxr-x)
I1101 09:49:59.320926 74584 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1101 09:49:59.320936 74584 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1101 09:49:59.320946 74584 main.go:143] libmachine: checking permissions on dir: /home/jenkins
I1101 09:49:59.320953 74584 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1101 09:49:59.320964 74584 main.go:143] libmachine: checking permissions on dir: /home
I1101 09:49:59.320971 74584 main.go:143] libmachine: skipping /home - not owner
I1101 09:49:59.320977 74584 main.go:143] libmachine: defining domain...
I1101 09:49:59.322386 74584 main.go:143] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-086339</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/addons-086339.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-086339'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
I1101 09:49:59.327390 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:41:14:53 in network default
I1101 09:49:59.328042 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:49:59.328057 74584 main.go:143] libmachine: starting domain...
I1101 09:49:59.328062 74584 main.go:143] libmachine: ensuring networks are active...
I1101 09:49:59.328857 74584 main.go:143] libmachine: Ensuring network default is active
I1101 09:49:59.329422 74584 main.go:143] libmachine: Ensuring network mk-addons-086339 is active
I1101 09:49:59.330127 74584 main.go:143] libmachine: getting domain XML...
I1101 09:49:59.331370 74584 main.go:143] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-086339</name>
<uuid>a0be334a-213a-4e9a-bad3-6168cb6c4d93</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/addons-086339.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:b9:a4:85'/>
<source network='mk-addons-086339'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:41:14:53'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1101 09:50:00.609088 74584 main.go:143] libmachine: waiting for domain to start...
I1101 09:50:00.610434 74584 main.go:143] libmachine: domain is now running
I1101 09:50:00.610456 74584 main.go:143] libmachine: waiting for IP...
I1101 09:50:00.611312 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:00.612106 74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
I1101 09:50:00.612125 74584 main.go:143] libmachine: trying to list again with source=arp
I1101 09:50:00.612466 74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
I1101 09:50:00.612543 74584 retry.go:31] will retry after 238.184391ms: waiting for domain to come up
I1101 09:50:00.851957 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:00.852980 74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
I1101 09:50:00.852999 74584 main.go:143] libmachine: trying to list again with source=arp
I1101 09:50:00.853378 74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
I1101 09:50:00.853417 74584 retry.go:31] will retry after 315.459021ms: waiting for domain to come up
I1101 09:50:01.170821 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:01.171618 74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
I1101 09:50:01.171637 74584 main.go:143] libmachine: trying to list again with source=arp
I1101 09:50:01.172000 74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
I1101 09:50:01.172045 74584 retry.go:31] will retry after 375.800667ms: waiting for domain to come up
I1101 09:50:01.549768 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:01.550551 74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
I1101 09:50:01.550568 74584 main.go:143] libmachine: trying to list again with source=arp
I1101 09:50:01.550912 74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
I1101 09:50:01.550947 74584 retry.go:31] will retry after 436.650242ms: waiting for domain to come up
I1101 09:50:01.989558 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:01.990329 74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
I1101 09:50:01.990346 74584 main.go:143] libmachine: trying to list again with source=arp
I1101 09:50:01.990674 74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
I1101 09:50:01.990717 74584 retry.go:31] will retry after 579.834412ms: waiting for domain to come up
I1101 09:50:02.572692 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:02.573467 74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
I1101 09:50:02.573488 74584 main.go:143] libmachine: trying to list again with source=arp
I1101 09:50:02.573815 74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
I1101 09:50:02.573865 74584 retry.go:31] will retry after 839.063755ms: waiting for domain to come up
I1101 09:50:03.414428 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:03.415319 74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
I1101 09:50:03.415342 74584 main.go:143] libmachine: trying to list again with source=arp
I1101 09:50:03.415659 74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
I1101 09:50:03.415702 74584 retry.go:31] will retry after 768.970672ms: waiting for domain to come up
I1101 09:50:04.186700 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:04.187419 74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
I1101 09:50:04.187437 74584 main.go:143] libmachine: trying to list again with source=arp
I1101 09:50:04.187709 74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
I1101 09:50:04.187746 74584 retry.go:31] will retry after 1.192575866s: waiting for domain to come up
I1101 09:50:05.382202 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:05.382884 74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
I1101 09:50:05.382907 74584 main.go:143] libmachine: trying to list again with source=arp
I1101 09:50:05.383270 74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
I1101 09:50:05.383321 74584 retry.go:31] will retry after 1.520355221s: waiting for domain to come up
I1101 09:50:06.906019 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:06.906685 74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
I1101 09:50:06.906702 74584 main.go:143] libmachine: trying to list again with source=arp
I1101 09:50:06.906966 74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
I1101 09:50:06.907000 74584 retry.go:31] will retry after 1.452783326s: waiting for domain to come up
I1101 09:50:08.361823 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:08.362686 74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
I1101 09:50:08.362711 74584 main.go:143] libmachine: trying to list again with source=arp
I1101 09:50:08.363062 74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
I1101 09:50:08.363109 74584 retry.go:31] will retry after 1.991395227s: waiting for domain to come up
I1101 09:50:10.357523 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:10.358353 74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
I1101 09:50:10.358372 74584 main.go:143] libmachine: trying to list again with source=arp
I1101 09:50:10.358693 74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
I1101 09:50:10.358739 74584 retry.go:31] will retry after 3.532288823s: waiting for domain to come up
I1101 09:50:13.893052 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:13.893671 74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
I1101 09:50:13.893684 74584 main.go:143] libmachine: trying to list again with source=arp
I1101 09:50:13.893975 74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
I1101 09:50:13.894012 74584 retry.go:31] will retry after 4.252229089s: waiting for domain to come up
I1101 09:50:18.147616 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:18.148327 74584 main.go:143] libmachine: domain addons-086339 has current primary IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:18.148350 74584 main.go:143] libmachine: found domain IP: 192.168.39.58
I1101 09:50:18.148365 74584 main.go:143] libmachine: reserving static IP address...
I1101 09:50:18.148791 74584 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-086339", mac: "52:54:00:b9:a4:85", ip: "192.168.39.58"} in network mk-addons-086339
I1101 09:50:18.327560 74584 main.go:143] libmachine: reserved static IP address 192.168.39.58 for domain addons-086339
I1101 09:50:18.327599 74584 main.go:143] libmachine: waiting for SSH...
I1101 09:50:18.327609 74584 main.go:143] libmachine: Getting to WaitForSSH function...
I1101 09:50:18.330699 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:18.331371 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:18.331408 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:18.331641 74584 main.go:143] libmachine: Using SSH client type: native
I1101 09:50:18.331928 74584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
I1101 09:50:18.331942 74584 main.go:143] libmachine: About to run SSH command:
exit 0
I1101 09:50:18.444329 74584 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1101 09:50:18.444817 74584 main.go:143] libmachine: domain creation complete
I1101 09:50:18.446547 74584 machine.go:94] provisionDockerMachine start ...
I1101 09:50:18.449158 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:18.449586 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:18.449617 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:18.449805 74584 main.go:143] libmachine: Using SSH client type: native
I1101 09:50:18.450004 74584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
I1101 09:50:18.450014 74584 main.go:143] libmachine: About to run SSH command:
hostname
I1101 09:50:18.560574 74584 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
I1101 09:50:18.560609 74584 buildroot.go:166] provisioning hostname "addons-086339"
I1101 09:50:18.564015 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:18.564582 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:18.564616 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:18.564819 74584 main.go:143] libmachine: Using SSH client type: native
I1101 09:50:18.565060 74584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
I1101 09:50:18.565073 74584 main.go:143] libmachine: About to run SSH command:
sudo hostname addons-086339 && echo "addons-086339" | sudo tee /etc/hostname
I1101 09:50:18.692294 74584 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-086339
I1101 09:50:18.695361 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:18.695730 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:18.695754 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:18.695958 74584 main.go:143] libmachine: Using SSH client type: native
I1101 09:50:18.696217 74584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
I1101 09:50:18.696238 74584 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-086339' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-086339/g' /etc/hosts;
else
echo '127.0.1.1 addons-086339' | sudo tee -a /etc/hosts;
fi
fi
I1101 09:50:18.817833 74584 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1101 09:50:18.817861 74584 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21830-70113/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-70113/.minikube}
I1101 09:50:18.817917 74584 buildroot.go:174] setting up certificates
I1101 09:50:18.817929 74584 provision.go:84] configureAuth start
I1101 09:50:18.820836 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:18.821182 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:18.821205 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:18.823468 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:18.823880 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:18.823917 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:18.824065 74584 provision.go:143] copyHostCerts
I1101 09:50:18.824126 74584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem (1082 bytes)
I1101 09:50:18.824236 74584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem (1123 bytes)
I1101 09:50:18.824293 74584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem (1675 bytes)
I1101 09:50:18.824393 74584 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem org=jenkins.addons-086339 san=[127.0.0.1 192.168.39.58 addons-086339 localhost minikube]
I1101 09:50:18.982158 74584 provision.go:177] copyRemoteCerts
I1101 09:50:18.982222 74584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1101 09:50:18.984649 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:18.985018 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:18.985044 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:18.985191 74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
I1101 09:50:19.074666 74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1101 09:50:19.105450 74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1101 09:50:19.136079 74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1101 09:50:19.165744 74584 provision.go:87] duration metric: took 347.798818ms to configureAuth
I1101 09:50:19.165785 74584 buildroot.go:189] setting minikube options for container-runtime
I1101 09:50:19.165985 74584 config.go:182] Loaded profile config "addons-086339": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:50:19.168523 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:19.169168 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:19.169200 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:19.169383 74584 main.go:143] libmachine: Using SSH client type: native
I1101 09:50:19.169583 74584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
I1101 09:50:19.169597 74584 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1101 09:50:19.428804 74584 main.go:143] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1101 09:50:19.428828 74584 machine.go:97] duration metric: took 982.268013ms to provisionDockerMachine
I1101 09:50:19.428839 74584 client.go:176] duration metric: took 21.890685225s to LocalClient.Create
I1101 09:50:19.428858 74584 start.go:167] duration metric: took 21.89074228s to libmachine.API.Create "addons-086339"
I1101 09:50:19.428865 74584 start.go:293] postStartSetup for "addons-086339" (driver="kvm2")
I1101 09:50:19.428874 74584 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1101 09:50:19.428936 74584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1101 09:50:19.431801 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:19.432251 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:19.432273 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:19.432405 74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
I1101 09:50:19.520001 74584 ssh_runner.go:195] Run: cat /etc/os-release
I1101 09:50:19.525231 74584 info.go:137] Remote host: Buildroot 2025.02
I1101 09:50:19.525259 74584 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/addons for local assets ...
I1101 09:50:19.525321 74584 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/files for local assets ...
I1101 09:50:19.525345 74584 start.go:296] duration metric: took 96.474195ms for postStartSetup
I1101 09:50:19.528299 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:19.528696 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:19.528717 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:19.528916 74584 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/config.json ...
I1101 09:50:19.529095 74584 start.go:128] duration metric: took 21.992639315s to createHost
I1101 09:50:19.531331 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:19.531699 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:19.531722 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:19.531876 74584 main.go:143] libmachine: Using SSH client type: native
I1101 09:50:19.532065 74584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
I1101 09:50:19.532075 74584 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1101 09:50:19.643235 74584 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761990619.607534656
I1101 09:50:19.643257 74584 fix.go:216] guest clock: 1761990619.607534656
I1101 09:50:19.643268 74584 fix.go:229] Guest: 2025-11-01 09:50:19.607534656 +0000 UTC Remote: 2025-11-01 09:50:19.52910603 +0000 UTC m=+22.094671738 (delta=78.428626ms)
I1101 09:50:19.643283 74584 fix.go:200] guest clock delta is within tolerance: 78.428626ms
I1101 09:50:19.643288 74584 start.go:83] releasing machines lock for "addons-086339", held for 22.106918768s
I1101 09:50:19.646471 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:19.646896 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:19.646926 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:19.647587 74584 ssh_runner.go:195] Run: cat /version.json
I1101 09:50:19.647618 74584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1101 09:50:19.650456 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:19.650903 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:19.650929 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:19.650937 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:19.651111 74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
I1101 09:50:19.651498 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:19.651548 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:19.651722 74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
I1101 09:50:19.732914 74584 ssh_runner.go:195] Run: systemctl --version
I1101 09:50:19.761438 74584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1101 09:50:19.921978 74584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1101 09:50:19.929230 74584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1101 09:50:19.929321 74584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1101 09:50:19.949743 74584 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1101 09:50:19.949779 74584 start.go:496] detecting cgroup driver to use...
I1101 09:50:19.949851 74584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1101 09:50:19.969767 74584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1101 09:50:19.988383 74584 docker.go:218] disabling cri-docker service (if available) ...
I1101 09:50:19.988445 74584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1101 09:50:20.006528 74584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1101 09:50:20.025137 74584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1101 09:50:20.177314 74584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1101 09:50:20.388642 74584 docker.go:234] disabling docker service ...
I1101 09:50:20.388724 74584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1101 09:50:20.405986 74584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1101 09:50:20.421236 74584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1101 09:50:20.585305 74584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1101 09:50:20.731424 74584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1101 09:50:20.748134 74584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1101 09:50:20.778555 74584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1101 09:50:20.778621 74584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1101 09:50:20.792483 74584 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1101 09:50:20.792563 74584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1101 09:50:20.806228 74584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1101 09:50:20.819314 74584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1101 09:50:20.832971 74584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1101 09:50:20.847580 74584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1101 09:50:20.861416 74584 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1101 09:50:20.884021 74584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I1101 09:50:20.898082 74584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1101 09:50:20.909995 74584 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1101 09:50:20.910054 74584 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1101 09:50:20.932503 74584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1101 09:50:20.945456 74584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 09:50:21.091518 74584 ssh_runner.go:195] Run: sudo systemctl restart crio
I1101 09:50:21.209311 74584 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1101 09:50:21.209394 74584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1101 09:50:21.215638 74584 start.go:564] Will wait 60s for crictl version
I1101 09:50:21.215718 74584 ssh_runner.go:195] Run: which crictl
I1101 09:50:21.220104 74584 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1101 09:50:21.265319 74584 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1101 09:50:21.265428 74584 ssh_runner.go:195] Run: crio --version
I1101 09:50:21.296407 74584 ssh_runner.go:195] Run: crio --version
I1101 09:50:21.330270 74584 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
I1101 09:50:21.333966 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:21.334360 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:21.334382 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:21.334577 74584 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1101 09:50:21.339385 74584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1101 09:50:21.355743 74584 kubeadm.go:884] updating cluster {Name:addons-086339 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-086339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1101 09:50:21.355864 74584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1101 09:50:21.355925 74584 ssh_runner.go:195] Run: sudo crictl images --output json
I1101 09:50:21.393026 74584 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
I1101 09:50:21.393097 74584 ssh_runner.go:195] Run: which lz4
I1101 09:50:21.397900 74584 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1101 09:50:21.403032 74584 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1101 09:50:21.403064 74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
I1101 09:50:22.958959 74584 crio.go:462] duration metric: took 1.561103562s to copy over tarball
I1101 09:50:22.959030 74584 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1101 09:50:24.646069 74584 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.687012473s)
I1101 09:50:24.646110 74584 crio.go:469] duration metric: took 1.687120275s to extract the tarball
I1101 09:50:24.646124 74584 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1101 09:50:24.689384 74584 ssh_runner.go:195] Run: sudo crictl images --output json
I1101 09:50:24.745551 74584 crio.go:514] all images are preloaded for cri-o runtime.
I1101 09:50:24.745581 74584 cache_images.go:86] Images are preloaded, skipping loading
I1101 09:50:24.745590 74584 kubeadm.go:935] updating node { 192.168.39.58 8443 v1.34.1 crio true true} ...
I1101 09:50:24.745676 74584 kubeadm.go:947] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-086339 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:addons-086339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1101 09:50:24.745742 74584 ssh_runner.go:195] Run: crio config
I1101 09:50:24.792600 74584 cni.go:84] Creating CNI manager for ""
I1101 09:50:24.792624 74584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1101 09:50:24.792643 74584 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1101 09:50:24.792678 74584 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-086339 NodeName:addons-086339 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1101 09:50:24.792797 74584 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.58
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/crio/crio.sock
name: "addons-086339"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.39.58"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1101 09:50:24.792863 74584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1101 09:50:24.805312 74584 binaries.go:44] Found k8s binaries, skipping transfer
I1101 09:50:24.805386 74584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1101 09:50:24.817318 74584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I1101 09:50:24.839738 74584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1101 09:50:24.861206 74584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
I1101 09:50:24.882598 74584 ssh_runner.go:195] Run: grep 192.168.39.58 control-plane.minikube.internal$ /etc/hosts
I1101 09:50:24.887202 74584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1101 09:50:24.903393 74584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 09:50:25.046563 74584 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1101 09:50:25.078339 74584 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339 for IP: 192.168.39.58
I1101 09:50:25.078373 74584 certs.go:195] generating shared ca certs ...
I1101 09:50:25.078393 74584 certs.go:227] acquiring lock for ca certs: {Name:mk20731b316fbc22c351241cefc40924880eeba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:50:25.078607 74584 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key
I1101 09:50:25.370750 74584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt ...
I1101 09:50:25.370787 74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt: {Name:mk44e2ef3879300ef465f5e14a88e17a335203c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:50:25.370979 74584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key ...
I1101 09:50:25.370991 74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key: {Name:mk6a6a936cb10734e248a5e184dc212d0dd50fee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:50:25.371084 74584 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key
I1101 09:50:25.596029 74584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.crt ...
I1101 09:50:25.596060 74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.crt: {Name:mk4883ce1337edc02ddc3ac7b72fc885fc718a43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:50:25.596251 74584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key ...
I1101 09:50:25.596263 74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key: {Name:mk64aaf400461d117ff2d2f246459980ad32acba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:50:25.596345 74584 certs.go:257] generating profile certs ...
I1101 09:50:25.596402 74584 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.key
I1101 09:50:25.596427 74584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt with IP's: []
I1101 09:50:25.837595 74584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt ...
I1101 09:50:25.837629 74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: {Name:mk6a3c2908e98c5011b9a353eff3f73fbb200e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:50:25.837800 74584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.key ...
I1101 09:50:25.837814 74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.key: {Name:mke495d2d15563b5194e6cade83d0c75b9212db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:50:25.837890 74584 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.key.698c417c
I1101 09:50:25.837920 74584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.crt.698c417c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.58]
I1101 09:50:25.933112 74584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.crt.698c417c ...
I1101 09:50:25.933142 74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.crt.698c417c: {Name:mk0254e8775842aca5cd671155531f1ec86ec40f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:50:25.933311 74584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.key.698c417c ...
I1101 09:50:25.933328 74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.key.698c417c: {Name:mk3e1746ccfcc3989b4b0944f75fafe8929108a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:50:25.933413 74584 certs.go:382] copying /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.crt.698c417c -> /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.crt
I1101 09:50:25.933491 74584 certs.go:386] copying /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.key.698c417c -> /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.key
I1101 09:50:25.933552 74584 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.key
I1101 09:50:25.933569 74584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.crt with IP's: []
I1101 09:50:26.270478 74584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.crt ...
I1101 09:50:26.270513 74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.crt: {Name:mk40ee0c5f510c6df044b64c5c0ccf02f754f518 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:50:26.270707 74584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.key ...
I1101 09:50:26.270719 74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.key: {Name:mk13d4f8cab34676a9c94f4e51f06fa6b4450e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:50:26.270893 74584 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem (1675 bytes)
I1101 09:50:26.270934 74584 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem (1082 bytes)
I1101 09:50:26.270958 74584 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem (1123 bytes)
I1101 09:50:26.270980 74584 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem (1675 bytes)
I1101 09:50:26.271524 74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1101 09:50:26.304432 74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1101 09:50:26.336585 74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1101 09:50:26.370965 74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1101 09:50:26.404637 74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1101 09:50:26.438434 74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1101 09:50:26.470419 74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1101 09:50:26.505400 74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1101 09:50:26.538739 74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1101 09:50:26.571139 74584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1101 09:50:26.596933 74584 ssh_runner.go:195] Run: openssl version
I1101 09:50:26.604814 74584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1101 09:50:26.625168 74584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1101 09:50:26.631403 74584 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 1 09:50 /usr/share/ca-certificates/minikubeCA.pem
I1101 09:50:26.631463 74584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1101 09:50:26.639666 74584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1101 09:50:26.655106 74584 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1101 09:50:26.660616 74584 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1101 09:50:26.660681 74584 kubeadm.go:401] StartCluster: {Name:addons-086339 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-086339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1101 09:50:26.660767 74584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1101 09:50:26.660830 74584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1101 09:50:26.713279 74584 cri.go:89] found id: ""
I1101 09:50:26.713354 74584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1101 09:50:26.732360 74584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1101 09:50:26.753939 74584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1101 09:50:26.768399 74584 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1101 09:50:26.768428 74584 kubeadm.go:158] found existing configuration files:
I1101 09:50:26.768509 74584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1101 09:50:26.780652 74584 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1101 09:50:26.780726 74584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1101 09:50:26.792996 74584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1101 09:50:26.805190 74584 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1101 09:50:26.805252 74584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1101 09:50:26.817970 74584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1101 09:50:26.829425 74584 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1101 09:50:26.829521 74584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1101 09:50:26.842392 74584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1101 09:50:26.855031 74584 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1101 09:50:26.855120 74584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1101 09:50:26.868465 74584 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1101 09:50:27.034423 74584 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1101 09:50:40.596085 74584 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
I1101 09:50:40.596157 74584 kubeadm.go:319] [preflight] Running pre-flight checks
I1101 09:50:40.596234 74584 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1101 09:50:40.596323 74584 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1101 09:50:40.596395 74584 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1101 09:50:40.596501 74584 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1101 09:50:40.598485 74584 out.go:252] - Generating certificates and keys ...
I1101 09:50:40.598596 74584 kubeadm.go:319] [certs] Using existing ca certificate authority
I1101 09:50:40.598677 74584 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1101 09:50:40.598786 74584 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1101 09:50:40.598884 74584 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1101 09:50:40.598965 74584 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1101 09:50:40.599020 74584 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1101 09:50:40.599097 74584 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1101 09:50:40.599235 74584 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-086339 localhost] and IPs [192.168.39.58 127.0.0.1 ::1]
I1101 09:50:40.599294 74584 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1101 09:50:40.599486 74584 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-086339 localhost] and IPs [192.168.39.58 127.0.0.1 ::1]
I1101 09:50:40.599578 74584 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1101 09:50:40.599671 74584 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1101 09:50:40.599744 74584 kubeadm.go:319] [certs] Generating "sa" key and public key
I1101 09:50:40.599837 74584 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1101 09:50:40.599908 74584 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1101 09:50:40.599990 74584 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1101 09:50:40.600070 74584 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1101 09:50:40.600159 74584 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1101 09:50:40.600236 74584 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1101 09:50:40.600342 74584 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1101 09:50:40.600430 74584 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1101 09:50:40.601841 74584 out.go:252] - Booting up control plane ...
I1101 09:50:40.601953 74584 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1101 09:50:40.602064 74584 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1101 09:50:40.602160 74584 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1101 09:50:40.602298 74584 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1101 09:50:40.602458 74584 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1101 09:50:40.602614 74584 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1101 09:50:40.602706 74584 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1101 09:50:40.602764 74584 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1101 09:50:40.602925 74584 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1101 09:50:40.603084 74584 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1101 09:50:40.603174 74584 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002004831s
I1101 09:50:40.603300 74584 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1101 09:50:40.603404 74584 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.58:8443/livez
I1101 09:50:40.603516 74584 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1101 09:50:40.603630 74584 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1101 09:50:40.603719 74584 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.147708519s
I1101 09:50:40.603845 74584 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.505964182s
I1101 09:50:40.603957 74584 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.503174092s
I1101 09:50:40.604099 74584 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1101 09:50:40.604336 74584 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1101 09:50:40.604410 74584 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1101 09:50:40.604590 74584 kubeadm.go:319] [mark-control-plane] Marking the node addons-086339 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1101 09:50:40.604649 74584 kubeadm.go:319] [bootstrap-token] Using token: n6ooj1.g2r52lt9s64k7lzx
I1101 09:50:40.606300 74584 out.go:252] - Configuring RBAC rules ...
I1101 09:50:40.606413 74584 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1101 09:50:40.606488 74584 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1101 09:50:40.606682 74584 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1101 09:50:40.606839 74584 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1101 09:50:40.607006 74584 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1101 09:50:40.607114 74584 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1101 09:50:40.607229 74584 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1101 09:50:40.607269 74584 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1101 09:50:40.607307 74584 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1101 09:50:40.607312 74584 kubeadm.go:319]
I1101 09:50:40.607359 74584 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1101 09:50:40.607364 74584 kubeadm.go:319]
I1101 09:50:40.607423 74584 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1101 09:50:40.607428 74584 kubeadm.go:319]
I1101 09:50:40.607448 74584 kubeadm.go:319] mkdir -p $HOME/.kube
I1101 09:50:40.607512 74584 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1101 09:50:40.607591 74584 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1101 09:50:40.607600 74584 kubeadm.go:319]
I1101 09:50:40.607669 74584 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1101 09:50:40.607677 74584 kubeadm.go:319]
I1101 09:50:40.607717 74584 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1101 09:50:40.607722 74584 kubeadm.go:319]
I1101 09:50:40.607785 74584 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1101 09:50:40.607880 74584 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1101 09:50:40.607975 74584 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1101 09:50:40.607984 74584 kubeadm.go:319]
I1101 09:50:40.608100 74584 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1101 09:50:40.608199 74584 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1101 09:50:40.608211 74584 kubeadm.go:319]
I1101 09:50:40.608275 74584 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token n6ooj1.g2r52lt9s64k7lzx \
I1101 09:50:40.608412 74584 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:ad8ee8749587d4da67d76f75358688c9a611301f34b35f940a9e7fa320504c7a \
I1101 09:50:40.608438 74584 kubeadm.go:319] --control-plane
I1101 09:50:40.608444 74584 kubeadm.go:319]
I1101 09:50:40.608584 74584 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1101 09:50:40.608595 74584 kubeadm.go:319]
I1101 09:50:40.608701 74584 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token n6ooj1.g2r52lt9s64k7lzx \
I1101 09:50:40.608845 74584 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:ad8ee8749587d4da67d76f75358688c9a611301f34b35f940a9e7fa320504c7a
I1101 09:50:40.608868 74584 cni.go:84] Creating CNI manager for ""
I1101 09:50:40.608880 74584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1101 09:50:40.610610 74584 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1101 09:50:40.612071 74584 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1101 09:50:40.627372 74584 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I1101 09:50:40.653117 74584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1101 09:50:40.653226 74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-086339 minikube.k8s.io/updated_at=2025_11_01T09_50_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=addons-086339 minikube.k8s.io/primary=true
I1101 09:50:40.653234 74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 09:50:40.841062 74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 09:50:40.841065 74584 ops.go:34] apiserver oom_adj: -16
I1101 09:50:41.341444 74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 09:50:41.841738 74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 09:50:42.341137 74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 09:50:42.841859 74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 09:50:43.341430 74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 09:50:43.842032 74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 09:50:44.341776 74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 09:50:44.842146 74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 09:50:45.342151 74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 09:50:45.471694 74584 kubeadm.go:1114] duration metric: took 4.818566134s to wait for elevateKubeSystemPrivileges
I1101 09:50:45.471741 74584 kubeadm.go:403] duration metric: took 18.811065248s to StartCluster
I1101 09:50:45.471765 74584 settings.go:142] acquiring lock: {Name:mk26e3d3b2448df59827bb1be60cde1d117dbc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:50:45.471940 74584 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21830-70113/kubeconfig
I1101 09:50:45.472382 74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/kubeconfig: {Name:mk1fa0677a0758214359bfdd6f326495ee5fd60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:50:45.472671 74584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1101 09:50:45.472717 74584 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1101 09:50:45.472765 74584 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1101 09:50:45.472916 74584 addons.go:70] Setting yakd=true in profile "addons-086339"
I1101 09:50:45.472917 74584 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-086339"
I1101 09:50:45.472959 74584 addons.go:239] Setting addon yakd=true in "addons-086339"
I1101 09:50:45.472963 74584 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-086339"
I1101 09:50:45.472976 74584 addons.go:70] Setting registry=true in profile "addons-086339"
I1101 09:50:45.472991 74584 addons.go:239] Setting addon registry=true in "addons-086339"
I1101 09:50:45.473004 74584 host.go:66] Checking if "addons-086339" exists ...
I1101 09:50:45.473010 74584 host.go:66] Checking if "addons-086339" exists ...
I1101 09:50:45.473012 74584 host.go:66] Checking if "addons-086339" exists ...
I1101 09:50:45.473003 74584 addons.go:70] Setting metrics-server=true in profile "addons-086339"
I1101 09:50:45.473051 74584 addons.go:70] Setting registry-creds=true in profile "addons-086339"
I1101 09:50:45.473068 74584 addons.go:239] Setting addon metrics-server=true in "addons-086339"
I1101 09:50:45.473084 74584 addons.go:239] Setting addon registry-creds=true in "addons-086339"
I1101 09:50:45.473121 74584 host.go:66] Checking if "addons-086339" exists ...
I1101 09:50:45.473144 74584 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-086339"
I1101 09:50:45.473150 74584 host.go:66] Checking if "addons-086339" exists ...
I1101 09:50:45.473175 74584 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-086339"
I1101 09:50:45.473203 74584 host.go:66] Checking if "addons-086339" exists ...
I1101 09:50:45.473564 74584 addons.go:70] Setting volcano=true in profile "addons-086339"
I1101 09:50:45.473589 74584 addons.go:239] Setting addon volcano=true in "addons-086339"
I1101 09:50:45.473622 74584 host.go:66] Checking if "addons-086339" exists ...
I1101 09:50:45.473737 74584 addons.go:70] Setting gcp-auth=true in profile "addons-086339"
I1101 09:50:45.473786 74584 mustload.go:66] Loading cluster: addons-086339
I1101 09:50:45.474010 74584 config.go:182] Loaded profile config "addons-086339": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:50:45.474219 74584 addons.go:70] Setting ingress-dns=true in profile "addons-086339"
I1101 09:50:45.474254 74584 addons.go:239] Setting addon ingress-dns=true in "addons-086339"
I1101 09:50:45.474313 74584 host.go:66] Checking if "addons-086339" exists ...
I1101 09:50:45.472963 74584 config.go:182] Loaded profile config "addons-086339": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:50:45.473011 74584 addons.go:70] Setting storage-provisioner=true in profile "addons-086339"
I1101 09:50:45.474667 74584 addons.go:239] Setting addon storage-provisioner=true in "addons-086339"
I1101 09:50:45.474685 74584 addons.go:70] Setting cloud-spanner=true in profile "addons-086339"
I1101 09:50:45.474699 74584 addons.go:239] Setting addon cloud-spanner=true in "addons-086339"
I1101 09:50:45.474703 74584 host.go:66] Checking if "addons-086339" exists ...
I1101 09:50:45.474721 74584 host.go:66] Checking if "addons-086339" exists ...
I1101 09:50:45.474993 74584 addons.go:70] Setting volumesnapshots=true in profile "addons-086339"
I1101 09:50:45.475011 74584 addons.go:239] Setting addon volumesnapshots=true in "addons-086339"
I1101 09:50:45.475031 74584 host.go:66] Checking if "addons-086339" exists ...
I1101 09:50:45.475344 74584 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-086339"
I1101 09:50:45.475368 74584 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-086339"
I1101 09:50:45.475372 74584 addons.go:70] Setting default-storageclass=true in profile "addons-086339"
I1101 09:50:45.475392 74584 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-086339"
I1101 09:50:45.475482 74584 addons.go:70] Setting ingress=true in profile "addons-086339"
I1101 09:50:45.475497 74584 addons.go:239] Setting addon ingress=true in "addons-086339"
I1101 09:50:45.475549 74584 host.go:66] Checking if "addons-086339" exists ...
I1101 09:50:45.474669 74584 addons.go:70] Setting inspektor-gadget=true in profile "addons-086339"
I1101 09:50:45.475789 74584 addons.go:239] Setting addon inspektor-gadget=true in "addons-086339"
I1101 09:50:45.475796 74584 out.go:179] * Verifying Kubernetes components...
I1101 09:50:45.475819 74584 host.go:66] Checking if "addons-086339" exists ...
I1101 09:50:45.474680 74584 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-086339"
I1101 09:50:45.476065 74584 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-086339"
I1101 09:50:45.476115 74584 host.go:66] Checking if "addons-086339" exists ...
I1101 09:50:45.477255 74584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 09:50:45.480031 74584 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
I1101 09:50:45.480031 74584 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1101 09:50:45.480033 74584 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.5
W1101 09:50:45.481113 74584 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1101 09:50:45.481446 74584 host.go:66] Checking if "addons-086339" exists ...
I1101 09:50:45.484726 74584 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1101 09:50:45.484753 74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1101 09:50:45.484938 74584 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1101 09:50:45.484960 74584 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1101 09:50:45.484966 74584 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1101 09:50:45.484973 74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1101 09:50:45.485125 74584 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1101 09:50:45.485153 74584 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1101 09:50:45.485273 74584 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-086339"
I1101 09:50:45.485691 74584 host.go:66] Checking if "addons-086339" exists ...
I1101 09:50:45.485920 74584 addons.go:239] Setting addon default-storageclass=true in "addons-086339"
I1101 09:50:45.485962 74584 host.go:66] Checking if "addons-086339" exists ...
I1101 09:50:45.487450 74584 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
I1101 09:50:45.487459 74584 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
I1101 09:50:45.487484 74584 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1101 09:50:45.487497 74584 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1101 09:50:45.487517 74584 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1101 09:50:45.487560 74584 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1101 09:50:45.487563 74584 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
I1101 09:50:45.488316 74584 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1101 09:50:45.488329 74584 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1101 09:50:45.488348 74584 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1101 09:50:45.489625 74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1101 09:50:45.489651 74584 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1101 09:50:45.489699 74584 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1101 09:50:45.489902 74584 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1101 09:50:45.490208 74584 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1101 09:50:45.490224 74584 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1101 09:50:45.490262 74584 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
I1101 09:50:45.490750 74584 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
I1101 09:50:45.491163 74584 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1101 09:50:45.491557 74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1101 09:50:45.491173 74584 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1101 09:50:45.491207 74584 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1101 09:50:45.491713 74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1101 09:50:45.491208 74584 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1101 09:50:45.491791 74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1101 09:50:45.491917 74584 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
I1101 09:50:45.492081 74584 out.go:179] - Using image docker.io/registry:3.0.0
I1101 09:50:45.492774 74584 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1101 09:50:45.493050 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.493676 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.494048 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.494216 74584 out.go:179] - Using image docker.io/busybox:stable
I1101 09:50:45.494271 74584 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1101 09:50:45.494283 74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1101 09:50:45.494189 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:45.494412 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.495222 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:45.495346 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.495450 74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
I1101 09:50:45.495550 74584 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1101 09:50:45.495608 74584 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
I1101 09:50:45.495670 74584 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1101 09:50:45.495688 74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1101 09:50:45.495797 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:45.495840 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.496406 74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
I1101 09:50:45.496819 74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
I1101 09:50:45.497603 74584 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1101 09:50:45.497622 74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1101 09:50:45.498607 74584 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1101 09:50:45.500140 74584 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1101 09:50:45.500156 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.500745 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.500905 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.501448 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:45.501490 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.501945 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.502137 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.502129 74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
I1101 09:50:45.502357 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:45.502386 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.502479 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.502618 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:45.502659 74584 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1101 09:50:45.502626 74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
I1101 09:50:45.502671 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.502621 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.503336 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:45.503381 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.503456 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:45.503481 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.503494 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.503740 74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
I1101 09:50:45.503831 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:45.503858 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.503858 74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
I1101 09:50:45.503886 74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
I1101 09:50:45.504294 74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
I1101 09:50:45.504670 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:45.504706 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.504708 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.504783 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:45.504812 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.504989 74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
I1101 09:50:45.505241 74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
I1101 09:50:45.505275 74584 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1101 09:50:45.505416 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:45.505439 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.505646 74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
I1101 09:50:45.505919 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.506301 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:45.506330 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.506479 74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
I1101 09:50:45.506657 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.507207 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:45.507243 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.507456 74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
I1101 09:50:45.507843 74584 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1101 09:50:45.509235 74584 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1101 09:50:45.509251 74584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1101 09:50:45.511923 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.512313 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:45.512339 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:45.512478 74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
W1101 09:50:45.863592 74584 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56024->192.168.39.58:22: read: connection reset by peer
I1101 09:50:45.863626 74584 retry.go:31] will retry after 353.468022ms: ssh: handshake failed: read tcp 192.168.39.1:56024->192.168.39.58:22: read: connection reset by peer
W1101 09:50:45.863706 74584 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56030->192.168.39.58:22: read: connection reset by peer
I1101 09:50:45.863718 74584 retry.go:31] will retry after 366.435822ms: ssh: handshake failed: read tcp 192.168.39.1:56030->192.168.39.58:22: read: connection reset by peer
I1101 09:50:46.204700 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1101 09:50:46.344397 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1101 09:50:46.364416 74584 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1101 09:50:46.364443 74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1101 09:50:46.382914 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1101 09:50:46.401116 74584 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1101 09:50:46.401152 74584 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1101 09:50:46.499674 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1101 09:50:46.525387 74584 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1101 09:50:46.525422 74584 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1101 09:50:46.528653 74584 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:50:46.528683 74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1101 09:50:46.537039 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1101 09:50:46.585103 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1101 09:50:46.700077 74584 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1101 09:50:46.700117 74584 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1101 09:50:46.802990 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1101 09:50:46.845193 74584 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1101 09:50:46.845228 74584 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1101 09:50:46.948887 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:50:47.114091 74584 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1101 09:50:47.114126 74584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1101 09:50:47.173908 74584 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.701178901s)
I1101 09:50:47.173921 74584 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.696642998s)
I1101 09:50:47.173999 74584 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1101 09:50:47.174095 74584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1101 09:50:47.203736 74584 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1101 09:50:47.203782 74584 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1101 09:50:47.327504 74584 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1101 09:50:47.327541 74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1101 09:50:47.447307 74584 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1101 09:50:47.447333 74584 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1101 09:50:47.479289 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1101 09:50:47.516143 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1101 09:50:47.537776 74584 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1101 09:50:47.537808 74584 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1101 09:50:47.602456 74584 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1101 09:50:47.602492 74584 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1101 09:50:47.634301 74584 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1101 09:50:47.634334 74584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1101 09:50:47.666382 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1101 09:50:47.896414 74584 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1101 09:50:47.896454 74584 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1101 09:50:48.070881 74584 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1101 09:50:48.070918 74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1101 09:50:48.088172 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1101 09:50:48.112581 74584 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1101 09:50:48.112615 74584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1101 09:50:48.384804 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.180058223s)
I1101 09:50:48.433222 74584 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1101 09:50:48.433251 74584 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1101 09:50:48.570103 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1101 09:50:48.712201 74584 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1101 09:50:48.712239 74584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1101 09:50:48.761409 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.41696863s)
I1101 09:50:49.019503 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.636542693s)
I1101 09:50:49.055833 74584 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1101 09:50:49.055864 74584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1101 09:50:49.130302 74584 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1101 09:50:49.130330 74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1101 09:50:49.321757 74584 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1101 09:50:49.321783 74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1101 09:50:49.571119 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1101 09:50:49.804708 74584 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1101 09:50:49.804738 74584 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1101 09:50:49.962509 74584 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1101 09:50:49.962544 74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1101 09:50:50.281087 74584 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1101 09:50:50.281117 74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1101 09:50:50.772055 74584 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1101 09:50:50.772080 74584 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1101 09:50:51.239409 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1101 09:50:52.962797 74584 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1101 09:50:52.966311 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:52.966764 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:52.966789 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:52.966934 74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
I1101 09:50:53.227038 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.727328057s)
I1101 09:50:53.227151 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.69006708s)
I1101 09:50:53.227189 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.642046598s)
I1101 09:50:53.227242 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.424224705s)
I1101 09:50:53.376728 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.427801852s)
W1101 09:50:53.376771 74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget created
serviceaccount/gadget created
configmap/gadget created
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
role.rbac.authorization.k8s.io/gadget-role created
rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
daemonset.apps/gadget created
stderr:
Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:50:53.376826 74584 retry.go:31] will retry after 359.696332ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget created
serviceaccount/gadget created
configmap/gadget created
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
role.rbac.authorization.k8s.io/gadget-role created
rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
daemonset.apps/gadget created
stderr:
Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:50:53.376871 74584 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.202843079s)
I1101 09:50:53.376921 74584 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.202805311s)
I1101 09:50:53.376950 74584 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1101 09:50:53.377909 74584 node_ready.go:35] waiting up to 6m0s for node "addons-086339" to be "Ready" ...
I1101 09:50:53.462748 74584 node_ready.go:49] node "addons-086339" is "Ready"
I1101 09:50:53.462778 74584 node_ready.go:38] duration metric: took 84.807458ms for node "addons-086339" to be "Ready" ...
I1101 09:50:53.462793 74584 api_server.go:52] waiting for apiserver process to appear ...
I1101 09:50:53.462847 74584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 09:50:53.534003 74584 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1101 09:50:53.650576 74584 addons.go:239] Setting addon gcp-auth=true in "addons-086339"
I1101 09:50:53.650630 74584 host.go:66] Checking if "addons-086339" exists ...
I1101 09:50:53.652687 74584 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1101 09:50:53.655511 74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:53.655896 74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
I1101 09:50:53.655920 74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
I1101 09:50:53.656060 74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
I1101 09:50:53.737577 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:50:53.969325 74584 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-086339" context rescaled to 1 replicas
I1101 09:50:55.148780 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.669443662s)
I1101 09:50:55.148826 74584 addons.go:480] Verifying addon ingress=true in "addons-086339"
I1101 09:50:55.148852 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.632675065s)
I1101 09:50:55.148956 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.482535978s)
I1101 09:50:55.149057 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.060852546s)
I1101 09:50:55.149064 74584 addons.go:480] Verifying addon registry=true in "addons-086339"
I1101 09:50:55.149094 74584 addons.go:480] Verifying addon metrics-server=true in "addons-086339"
I1101 09:50:55.149162 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.579011593s)
I1101 09:50:55.150934 74584 out.go:179] * Verifying ingress addon...
I1101 09:50:55.150992 74584 out.go:179] * Verifying registry addon...
I1101 09:50:55.151019 74584 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-086339 service yakd-dashboard -n yakd-dashboard
I1101 09:50:55.152636 74584 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1101 09:50:55.152833 74584 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1101 09:50:55.236576 74584 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1101 09:50:55.236603 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:50:55.236704 74584 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1101 09:50:55.236726 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:50:55.608860 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.037686923s)
W1101 09:50:55.608910 74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1101 09:50:55.608932 74584 retry.go:31] will retry after 233.800882ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1101 09:50:55.697978 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:50:55.698030 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:50:55.843247 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1101 09:50:56.241749 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:50:56.241968 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:50:56.550655 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.311175816s)
I1101 09:50:56.550716 74584 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-086339"
I1101 09:50:56.550663 74584 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.087794232s)
I1101 09:50:56.550810 74584 api_server.go:72] duration metric: took 11.078058308s to wait for apiserver process to appear ...
I1101 09:50:56.550891 74584 api_server.go:88] waiting for apiserver healthz status ...
I1101 09:50:56.550935 74584 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
I1101 09:50:56.552309 74584 out.go:179] * Verifying csi-hostpath-driver addon...
I1101 09:50:56.554454 74584 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1101 09:50:56.566874 74584 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
ok
I1101 09:50:56.569220 74584 api_server.go:141] control plane version: v1.34.1
I1101 09:50:56.569247 74584 api_server.go:131] duration metric: took 18.347182ms to wait for apiserver health ...
I1101 09:50:56.569258 74584 system_pods.go:43] waiting for kube-system pods to appear ...
I1101 09:50:56.586752 74584 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1101 09:50:56.586776 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:50:56.587214 74584 system_pods.go:59] 20 kube-system pods found
I1101 09:50:56.587266 74584 system_pods.go:61] "amd-gpu-device-plugin-lr4lw" [bee1e3ae-5d43-4b43-a348-0e04ec066093] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1101 09:50:56.587277 74584 system_pods.go:61] "coredns-66bc5c9577-5v6h7" [ff58ca9c-6949-4ab8-b8ff-8be8e7b75757] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1101 09:50:56.587289 74584 system_pods.go:61] "coredns-66bc5c9577-vsbrs" [c3a65dae-82f4-4f33-b460-fa45a39b3342] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1101 09:50:56.587297 74584 system_pods.go:61] "csi-hostpath-attacher-0" [50e03a30-f2e9-4ec1-ba85-6da2654030c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1101 09:50:56.587304 74584 system_pods.go:61] "csi-hostpath-resizer-0" [d2c565f0-80a3-4b2d-a99b-edc1d7ae4fe2] Pending
I1101 09:50:56.587318 74584 system_pods.go:61] "csi-hostpathplugin-z7vjp" [96e87cd6-068d-40af-9966-b875b9a7629e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1101 09:50:56.587325 74584 system_pods.go:61] "etcd-addons-086339" [f17e5eab-51c0-409a-9bb3-3cb5e71200fd] Running
I1101 09:50:56.587336 74584 system_pods.go:61] "kube-apiserver-addons-086339" [51b3d29f-af5e-441a-b3c0-754241fc92bc] Running
I1101 09:50:56.587343 74584 system_pods.go:61] "kube-controller-manager-addons-086339" [62d54b81-f6bc-4bdc-bd22-c8a6fc39a043] Running
I1101 09:50:56.587352 74584 system_pods.go:61] "kube-ingress-dns-minikube" [e328fd3e-a381-414d-ba99-1aa6f7f40585] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1101 09:50:56.587357 74584 system_pods.go:61] "kube-proxy-7fck9" [a834adcc-b0ec-4cad-8944-bea90a627787] Running
I1101 09:50:56.587365 74584 system_pods.go:61] "kube-scheduler-addons-086339" [4db76834-5184-4a83-a228-35e83abc8c9d] Running
I1101 09:50:56.587372 74584 system_pods.go:61] "metrics-server-85b7d694d7-6lx9r" [c4e44e90-7d77-43fc-913f-f26877e52760] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1101 09:50:56.587378 74584 system_pods.go:61] "nvidia-device-plugin-daemonset-jh2xq" [0a9234e2-8d6a-4110-86be-ff05f9be1a29] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1101 09:50:56.587387 74584 system_pods.go:61] "registry-6b586f9694-8zvc5" [23d65f21-71d0-4da4-8f2f-5b59f93f9085] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1101 09:50:56.587395 74584 system_pods.go:61] "registry-creds-764b6fb674-ztjtq" [ae641ce9-b248-46a3-8e01-9d25e8d29825] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1101 09:50:56.587408 74584 system_pods.go:61] "registry-proxy-4p4n9" [73d260fc-8c68-439c-a460-208cdb29b271] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1101 09:50:56.587416 74584 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4kwxj" [e301a0c5-17dc-43be-9fd5-c14b76c1b92c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1101 09:50:56.587429 74584 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wzgp7" [4c770fa7-174c-43ab-ac63-635b19152843] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1101 09:50:56.587437 74584 system_pods.go:61] "storage-provisioner" [4c394064-33ff-4fd0-a4bc-afb948952ac6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1101 09:50:56.587448 74584 system_pods.go:74] duration metric: took 18.182475ms to wait for pod list to return data ...
I1101 09:50:56.587460 74584 default_sa.go:34] waiting for default service account to be created ...
I1101 09:50:56.596967 74584 default_sa.go:45] found service account: "default"
I1101 09:50:56.596990 74584 default_sa.go:55] duration metric: took 9.524828ms for default service account to be created ...
I1101 09:50:56.596999 74584 system_pods.go:116] waiting for k8s-apps to be running ...
I1101 09:50:56.613956 74584 system_pods.go:86] 20 kube-system pods found
I1101 09:50:56.613988 74584 system_pods.go:89] "amd-gpu-device-plugin-lr4lw" [bee1e3ae-5d43-4b43-a348-0e04ec066093] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1101 09:50:56.613995 74584 system_pods.go:89] "coredns-66bc5c9577-5v6h7" [ff58ca9c-6949-4ab8-b8ff-8be8e7b75757] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1101 09:50:56.614003 74584 system_pods.go:89] "coredns-66bc5c9577-vsbrs" [c3a65dae-82f4-4f33-b460-fa45a39b3342] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1101 09:50:56.614009 74584 system_pods.go:89] "csi-hostpath-attacher-0" [50e03a30-f2e9-4ec1-ba85-6da2654030c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1101 09:50:56.614014 74584 system_pods.go:89] "csi-hostpath-resizer-0" [d2c565f0-80a3-4b2d-a99b-edc1d7ae4fe2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1101 09:50:56.614020 74584 system_pods.go:89] "csi-hostpathplugin-z7vjp" [96e87cd6-068d-40af-9966-b875b9a7629e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1101 09:50:56.614023 74584 system_pods.go:89] "etcd-addons-086339" [f17e5eab-51c0-409a-9bb3-3cb5e71200fd] Running
I1101 09:50:56.614028 74584 system_pods.go:89] "kube-apiserver-addons-086339" [51b3d29f-af5e-441a-b3c0-754241fc92bc] Running
I1101 09:50:56.614033 74584 system_pods.go:89] "kube-controller-manager-addons-086339" [62d54b81-f6bc-4bdc-bd22-c8a6fc39a043] Running
I1101 09:50:56.614040 74584 system_pods.go:89] "kube-ingress-dns-minikube" [e328fd3e-a381-414d-ba99-1aa6f7f40585] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1101 09:50:56.614045 74584 system_pods.go:89] "kube-proxy-7fck9" [a834adcc-b0ec-4cad-8944-bea90a627787] Running
I1101 09:50:56.614051 74584 system_pods.go:89] "kube-scheduler-addons-086339" [4db76834-5184-4a83-a228-35e83abc8c9d] Running
I1101 09:50:56.614058 74584 system_pods.go:89] "metrics-server-85b7d694d7-6lx9r" [c4e44e90-7d77-43fc-913f-f26877e52760] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1101 09:50:56.614073 74584 system_pods.go:89] "nvidia-device-plugin-daemonset-jh2xq" [0a9234e2-8d6a-4110-86be-ff05f9be1a29] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1101 09:50:56.614089 74584 system_pods.go:89] "registry-6b586f9694-8zvc5" [23d65f21-71d0-4da4-8f2f-5b59f93f9085] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1101 09:50:56.614095 74584 system_pods.go:89] "registry-creds-764b6fb674-ztjtq" [ae641ce9-b248-46a3-8e01-9d25e8d29825] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1101 09:50:56.614100 74584 system_pods.go:89] "registry-proxy-4p4n9" [73d260fc-8c68-439c-a460-208cdb29b271] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1101 09:50:56.614105 74584 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4kwxj" [e301a0c5-17dc-43be-9fd5-c14b76c1b92c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1101 09:50:56.614114 74584 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wzgp7" [4c770fa7-174c-43ab-ac63-635b19152843] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1101 09:50:56.614118 74584 system_pods.go:89] "storage-provisioner" [4c394064-33ff-4fd0-a4bc-afb948952ac6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1101 09:50:56.614126 74584 system_pods.go:126] duration metric: took 17.122448ms to wait for k8s-apps to be running ...
I1101 09:50:56.614136 74584 system_svc.go:44] waiting for kubelet service to be running ....
I1101 09:50:56.614196 74584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1101 09:50:56.662305 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:50:56.676451 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:50:57.009640 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.27202291s)
W1101 09:50:57.009684 74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:50:57.009709 74584 retry.go:31] will retry after 295.092784ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:50:57.009722 74584 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.357005393s)
I1101 09:50:57.011440 74584 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
I1101 09:50:57.012826 74584 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1101 09:50:57.014068 74584 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1101 09:50:57.014084 74584 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1101 09:50:57.060410 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:50:57.092501 74584 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1101 09:50:57.092526 74584 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1101 09:50:57.163456 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:50:57.166739 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:50:57.235815 74584 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1101 09:50:57.235844 74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1101 09:50:57.305656 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:50:57.336319 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1101 09:50:57.561645 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:50:57.662574 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:50:57.663877 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:50:58.063249 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:50:58.157346 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:50:58.162591 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:50:58.566038 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:50:58.574812 74584 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.96059055s)
I1101 09:50:58.574848 74584 system_svc.go:56] duration metric: took 1.960707525s WaitForService to wait for kubelet
I1101 09:50:58.574856 74584 kubeadm.go:587] duration metric: took 13.102108035s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1101 09:50:58.574874 74584 node_conditions.go:102] verifying NodePressure condition ...
I1101 09:50:58.575108 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.73180936s)
I1101 09:50:58.586405 74584 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1101 09:50:58.586436 74584 node_conditions.go:123] node cpu capacity is 2
I1101 09:50:58.586457 74584 node_conditions.go:105] duration metric: took 11.577545ms to run NodePressure ...
I1101 09:50:58.586472 74584 start.go:242] waiting for startup goroutines ...
I1101 09:50:58.664635 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:50:58.665016 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:50:59.063972 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:50:59.170042 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:50:59.176798 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:50:59.577259 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:50:59.664063 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:50:59.665180 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:00.063306 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:00.173864 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:00.174338 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.868634982s)
W1101 09:51:00.174389 74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:51:00.174423 74584 retry.go:31] will retry after 509.276592ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:51:00.174461 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.838092131s)
I1101 09:51:00.175590 74584 addons.go:480] Verifying addon gcp-auth=true in "addons-086339"
I1101 09:51:00.176082 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:00.177144 74584 out.go:179] * Verifying gcp-auth addon...
I1101 09:51:00.179153 74584 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1101 09:51:00.185078 74584 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1101 09:51:00.185104 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:00.569905 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:00.666711 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:00.668288 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:00.684564 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:51:00.685802 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:01.058804 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:01.162413 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:01.162519 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:01.184967 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:01.561792 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:01.660578 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:01.660604 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:01.687510 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:02.048703 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.364096236s)
W1101 09:51:02.048744 74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:51:02.048770 74584 retry.go:31] will retry after 922.440306ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:51:02.058033 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:02.156454 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:02.156517 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:02.184626 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:02.560632 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:02.663377 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:02.663392 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:02.682802 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:02.972204 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:51:03.066417 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:03.162498 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:03.164331 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:03.185238 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:03.558965 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:03.660685 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:03.662797 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:03.683857 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:03.988155 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.015906584s)
W1101 09:51:03.988197 74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:51:03.988221 74584 retry.go:31] will retry after 1.512024934s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:51:04.059661 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:04.158989 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:04.159171 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:04.184262 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:04.559848 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:04.665219 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:04.666152 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:04.684684 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:05.059373 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:05.157706 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:05.158120 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:05.184998 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:05.500748 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:51:05.560240 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:05.659023 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:05.660031 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:05.684729 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:06.059474 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:06.157196 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:06.157311 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:06.182088 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
W1101 09:51:06.269741 74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:51:06.269786 74584 retry.go:31] will retry after 2.204116799s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:51:06.559209 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:06.657408 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:06.657492 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:06.683284 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:07.059744 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:07.160264 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:07.160549 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:07.183753 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:07.558791 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:07.658454 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:07.662675 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:07.684198 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:08.065874 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:08.160732 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:08.161495 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:08.182870 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:08.474158 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:51:08.564218 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:08.659007 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:08.661853 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:08.684365 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:09.062466 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:09.159228 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:09.159372 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:09.183927 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:09.561230 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:09.664415 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:09.666273 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:09.684865 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:09.700010 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.225813085s)
W1101 09:51:09.700056 74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:51:09.700081 74584 retry.go:31] will retry after 3.484047661s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:51:10.059617 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:10.156799 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:10.156883 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:10.183999 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:10.560483 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:10.661603 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:10.661780 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:10.686351 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:11.081718 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:11.188353 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:11.188507 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:11.188624 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:11.558634 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:11.660662 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:11.663221 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:11.683762 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:12.059387 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:12.156602 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:12.156961 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:12.183069 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:12.558360 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:12.657779 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:12.659195 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:12.684167 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:13.059425 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:13.159273 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:13.159720 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:13.182662 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:13.184729 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:51:13.558837 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:13.659127 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:13.659431 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:13.682290 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
W1101 09:51:14.013627 74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:51:14.013674 74584 retry.go:31] will retry after 3.772853511s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:51:14.060473 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:14.168480 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:14.168525 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:14.195048 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:14.559885 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:14.655949 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:14.656674 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:14.682561 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:15.059773 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:15.158683 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:15.158997 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:15.185198 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:15.559183 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:15.657568 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:15.657667 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:15.683337 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:16.059611 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:16.156727 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:16.158488 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:16.182596 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:16.558923 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:16.656902 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:16.657753 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:16.683813 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:17.059799 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:17.157794 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:17.158058 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:17.183320 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:17.562511 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:17.661802 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:17.663610 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:17.683753 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:17.786898 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:51:18.062486 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:18.165903 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:18.166305 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:18.185036 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:18.563358 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:18.661780 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:18.664168 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:18.686501 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:19.062933 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:19.159993 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.373047606s)
W1101 09:51:19.160054 74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:51:19.160090 74584 retry.go:31] will retry after 8.062833615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:51:19.160265 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:19.161792 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:19.187129 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:19.562165 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:19.662490 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:19.662887 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:19.685224 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:20.062452 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:20.158649 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:20.158963 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:20.185553 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:20.560324 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:20.663470 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:20.664773 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:20.687217 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:21.058336 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:21.158067 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:21.158764 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:21.184179 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:21.562709 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:21.660636 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:21.661331 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:21.683251 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:22.058468 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:22.158449 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:22.161441 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:22.183647 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:22.559209 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:22.657596 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:22.658067 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:22.684022 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:23.060587 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:23.159313 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:23.160492 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:23.183233 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:23.577231 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:23.658412 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:23.661233 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:23.684740 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:24.059042 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:24.157394 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:24.158911 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:24.182864 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:24.559933 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:24.657638 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:24.661214 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:24.686127 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:25.059953 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:25.158151 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:25.160939 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:25.183657 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:25.565339 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:25.663990 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:25.664201 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:25.683465 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:26.059376 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:26.158991 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:26.159088 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:26.184884 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:26.559386 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:26.657922 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:26.660583 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:26.683688 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:27.058939 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:27.156101 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:27.156998 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:27.182909 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:27.224025 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:51:27.562477 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:27.660651 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:27.662259 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:27.681905 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:28.059984 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:28.160493 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:28.162286 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:28.186135 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
W1101 09:51:28.200979 74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:51:28.201029 74584 retry.go:31] will retry after 10.395817371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:51:28.558989 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:28.657430 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:28.660330 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:28.683885 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:29.061934 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:29.157765 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:29.157917 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:29.184278 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:29.560897 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:29.657774 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:29.657838 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:29.683106 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:30.059693 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:30.160732 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:51:30.166378 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:30.265635 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:30.558787 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:30.656060 74584 kapi.go:107] duration metric: took 35.503223323s to wait for kubernetes.io/minikube-addons=registry ...
I1101 09:51:30.656373 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:30.682215 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:31.059187 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:31.157561 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:31.258067 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:31.560106 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:31.657305 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:31.683226 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:32.059058 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:32.158395 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:32.182943 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:32.559674 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:32.660135 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:32.684028 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:33.059220 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:33.159029 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:33.189054 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:33.699380 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:33.699471 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:33.700370 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:34.059307 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:34.158409 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:34.189459 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:34.558736 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:34.656864 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:34.682855 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:35.058847 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:35.156770 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:35.182411 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:35.559605 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:35.657060 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:35.682886 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:36.059230 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:36.158265 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:36.185067 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:36.562462 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:36.657785 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:36.684734 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:37.059270 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:37.156638 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:37.184172 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:37.558438 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:37.656955 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:37.684255 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:38.061827 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:38.157365 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:38.182685 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:38.560831 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:38.597843 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:51:38.656804 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:38.686009 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:39.061543 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:39.158425 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:39.183760 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:39.559306 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:39.657197 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:39.684893 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:39.748441 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.150549422s)
W1101 09:51:39.748504 74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:51:39.748545 74584 retry.go:31] will retry after 20.354212059s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:51:40.091278 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:40.159135 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:40.189976 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:40.561293 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:40.657506 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:40.682812 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:41.059036 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:41.157077 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:41.183024 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:41.560657 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:41.662059 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:41.686139 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:42.059712 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:42.158078 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:42.184717 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:42.558428 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:42.657474 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:42.682401 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:43.061067 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:43.159023 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:43.182945 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:43.559721 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:43.658905 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:43.683665 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:44.059768 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:44.156686 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:44.182520 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:44.558486 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:44.659410 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:44.686714 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:45.059691 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:45.161012 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:45.186846 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:45.566991 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:45.661771 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:45.683563 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:46.061274 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:46.157945 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:46.184842 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:46.559462 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:46.659702 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:46.682680 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:47.058242 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:47.159894 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:47.185416 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:47.561755 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:47.660011 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:47.683518 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:48.061815 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:48.158606 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:48.186741 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:48.562551 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:48.660513 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:48.683374 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:49.061955 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:49.158516 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:49.182835 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:49.558347 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:49.660756 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:49.685651 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:50.059457 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:50.161169 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:50.185382 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:50.560490 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:50.667931 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:50.691744 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:51.060229 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:51.163272 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:51.185468 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:51.561847 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:51.657559 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:51.684472 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:52.065897 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:52.165405 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:52.184183 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:52.558429 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:52.659763 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:52.687124 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:53.060334 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:53.159793 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:53.260599 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:53.836679 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:53.844731 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:53.846382 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:54.061169 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:54.160164 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:54.184130 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:54.559624 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:54.660771 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:54.683387 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:55.060182 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:55.158098 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:55.184607 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:55.568135 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:55.666901 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:55.688352 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:56.061312 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:56.160289 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:56.183561 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:56.559442 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:56.666114 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:56.686070 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:57.059598 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:57.157253 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:57.184083 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:57.559370 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:57.657282 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:57.684369 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:58.059645 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:58.160950 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:58.183605 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:58.559980 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:58.660720 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:58.682723 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:59.061658 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:59.161368 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:59.186554 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:51:59.562493 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:51:59.658000 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:51:59.686396 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:00.059261 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:00.103310 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:52:00.158774 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:52:00.183231 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:00.562324 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:00.659611 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:52:00.682795 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:01.061408 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:01.158866 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:52:01.188200 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:01.344727 74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.241365643s)
W1101 09:52:01.344783 74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:52:01.344810 74584 retry.go:31] will retry after 24.70836809s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
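The `retry.go:31` line above shows minikube's retry helper scheduling another attempt after a randomized delay. A minimal sketch of that retry-with-backoff pattern (hypothetical Python stand-in; the real implementation is Go and uses k8s.io/apimachinery-style randomized exponential backoff):

```python
import random

def retry(fn, attempts=5, base_delay=1.0):
    """Call fn until it succeeds, backing off longer after each failure."""
    delays = []
    for attempt in range(attempts):
        try:
            return fn(), delays
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            # exponential backoff with jitter; a real caller would sleep here
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
            delays.append(delay)

calls = {"n": 0}
def flaky_apply():
    """Simulated addon apply that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("apply failed")
    return "ok"

result, delays = retry(flaky_apply)
print(result, len(delays))  # succeeds on the third call after two backoff delays
```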
I1101 09:52:01.558702 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:01.657288 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:52:01.683224 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:02.061177 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:02.158031 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:52:02.185134 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:02.559729 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:02.661884 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:52:02.684276 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:03.058102 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:03.159115 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:52:03.184840 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:03.559718 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:03.658993 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:52:03.682755 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:04.061600 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:04.157504 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:52:04.182206 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:04.558833 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:04.658122 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:52:04.690795 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:05.060282 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:05.159649 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:52:05.182512 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:05.558584 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:05.657372 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:52:05.682747 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:06.059347 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:06.156954 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:52:06.184088 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:06.559677 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:06.657737 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:52:06.683063 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:07.058922 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:07.156647 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:52:07.183210 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:07.559741 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:07.656366 74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:52:07.684732 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:08.060305 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:08.161326 74584 kapi.go:107] duration metric: took 1m13.008685899s to wait for app.kubernetes.io/name=ingress-nginx ...
I1101 09:52:08.184485 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:08.563527 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:08.684225 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:09.062454 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:09.183134 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:09.559703 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:09.683034 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:10.059517 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:10.183595 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:10.559051 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:10.684292 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:11.060725 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:11.184057 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:11.560407 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:11.684061 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:12.059623 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:12.338951 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:12.563238 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:12.687086 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:52:13.065805 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:13.186970 74584 kapi.go:107] duration metric: took 1m13.007813603s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1101 09:52:13.188654 74584 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-086339 cluster.
I1101 09:52:13.190102 74584 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1101 09:52:13.191551 74584 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I1101 09:52:13.561959 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:14.059590 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:14.558397 74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:52:15.059526 74584 kapi.go:107] duration metric: took 1m18.505070405s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1101 09:52:26.053439 74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
W1101 09:52:26.787218 74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
W1101 09:52:26.787354 74584 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
]
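The stderr above shows kubectl's schema validation rejecting ig-crd.yaml because the top-level `apiVersion` and `kind` fields are absent. A minimal sketch of that kind of preflight check (hypothetical helper using naive line parsing, not kubectl's actual schema validation, and single-document only):

```python
def missing_required_fields(manifest: str) -> list[str]:
    """Return which of apiVersion/kind are absent at the manifest's top level."""
    required = {"apiVersion", "kind"}
    # top-level keys are the unindented "key:" lines of the YAML document
    present = {
        line.split(":", 1)[0].strip()
        for line in manifest.splitlines()
        if line and not line[0].isspace() and ":" in line
    }
    return sorted(required - present)

bad = "metadata:\n  name: gadget\nspec: {}\n"
good = "apiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n  name: gadget\n"

print(missing_required_fields(bad))   # ['apiVersion', 'kind']
print(missing_required_fields(good))  # []
```

Running such a check before `kubectl apply` would catch the malformed ig-crd.yaml without needing `--validate=false`.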
I1101 09:52:26.789142 74584 out.go:179] * Enabled addons: default-storageclass, registry-creds, amd-gpu-device-plugin, storage-provisioner, nvidia-device-plugin, cloud-spanner, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
I1101 09:52:26.790527 74584 addons.go:515] duration metric: took 1m41.317758805s for enable addons: enabled=[default-storageclass registry-creds amd-gpu-device-plugin storage-provisioner nvidia-device-plugin cloud-spanner ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
I1101 09:52:26.790585 74584 start.go:247] waiting for cluster config update ...
I1101 09:52:26.790606 74584 start.go:256] writing updated cluster config ...
I1101 09:52:26.790869 74584 ssh_runner.go:195] Run: rm -f paused
I1101 09:52:26.797220 74584 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1101 09:52:26.802135 74584 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vsbrs" in "kube-system" namespace to be "Ready" or be gone ...
I1101 09:52:26.807671 74584 pod_ready.go:94] pod "coredns-66bc5c9577-vsbrs" is "Ready"
I1101 09:52:26.807696 74584 pod_ready.go:86] duration metric: took 5.533544ms for pod "coredns-66bc5c9577-vsbrs" in "kube-system" namespace to be "Ready" or be gone ...
I1101 09:52:26.809972 74584 pod_ready.go:83] waiting for pod "etcd-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
I1101 09:52:26.815396 74584 pod_ready.go:94] pod "etcd-addons-086339" is "Ready"
I1101 09:52:26.815421 74584 pod_ready.go:86] duration metric: took 5.421578ms for pod "etcd-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
I1101 09:52:26.818352 74584 pod_ready.go:83] waiting for pod "kube-apiserver-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
I1101 09:52:26.823369 74584 pod_ready.go:94] pod "kube-apiserver-addons-086339" is "Ready"
I1101 09:52:26.823403 74584 pod_ready.go:86] duration metric: took 5.02397ms for pod "kube-apiserver-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
I1101 09:52:26.825247 74584 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
I1101 09:52:27.201328 74584 pod_ready.go:94] pod "kube-controller-manager-addons-086339" is "Ready"
I1101 09:52:27.201355 74584 pod_ready.go:86] duration metric: took 376.08311ms for pod "kube-controller-manager-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
I1101 09:52:27.402263 74584 pod_ready.go:83] waiting for pod "kube-proxy-7fck9" in "kube-system" namespace to be "Ready" or be gone ...
I1101 09:52:27.802591 74584 pod_ready.go:94] pod "kube-proxy-7fck9" is "Ready"
I1101 09:52:27.802625 74584 pod_ready.go:86] duration metric: took 400.328354ms for pod "kube-proxy-7fck9" in "kube-system" namespace to be "Ready" or be gone ...
I1101 09:52:28.002425 74584 pod_ready.go:83] waiting for pod "kube-scheduler-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
I1101 09:52:28.401943 74584 pod_ready.go:94] pod "kube-scheduler-addons-086339" is "Ready"
I1101 09:52:28.401969 74584 pod_ready.go:86] duration metric: took 399.516912ms for pod "kube-scheduler-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
I1101 09:52:28.401979 74584 pod_ready.go:40] duration metric: took 1.604730154s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1101 09:52:28.446357 74584 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
I1101 09:52:28.448281 74584 out.go:179] * Done! kubectl is now configured to use "addons-086339" cluster and "default" namespace by default
==> CRI-O <==
Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.493762476Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991272493732721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:511388,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8da11cc9-e3e6-42b7-8b25-9fbef0b5863c name=/runtime.v1.ImageService/ImageFsInfo
Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.495751222Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a43b13bd-cd83-453e-bfec-194e48df3256 name=/runtime.v1.RuntimeService/ListContainers
Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.495886828Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a43b13bd-cd83-453e-bfec-194e48df3256 name=/runtime.v1.RuntimeService/ListContainers
Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.496396001Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8f9ab035f10b883f89c331d67218f109856992b9b069efdae0a16a908bf656d,PodSandboxId:ecbb6e0269dbe5206ee40e41cf202e8a0f1fc8985220bca67dd2abcee664753f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761990753121450389,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bd0f0b90-ebd1-434e-86db-7717f59bb0b2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60f64f1e1264248ca86d3e8ea17c90635c9d479311fe8d5ea622b661f0068bd6,PodSandboxId:b2e63f129e7cad5f03427260dc3589db4cecd4b45329bdb1e1023738a84b3985,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761990727410551183,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-g7dks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a4165ee4-5d09-49d4-a0c1-f663b2084a0d,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4b410307ca2339c52c3d12763e6b754600ea116c26c1df56bd5b04a1a68661d,PodSandboxId:48e637e86e4493f303489a52457e2b59ba63b33cc608f38bb21f8e651a9e1571,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990712169152493,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dw6sn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51ccb987-b8f5-42f1-af70-9d22dd5ca2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:764b375ef3791c087e6ad4b248126f0c7c98e6065f6bd3c282044dcc212ac1f4,PodSandboxId:1c83f726dda755d3ed283799c973eeabdf1da173f6f6ce420a3d047efb307a42,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990709662174283,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d7qkm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0111b6b5-409d-4b18-a391-db0a0fbe7882,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ccff636c81da778123aaba73ca1c6a96114c3d9b455724fc184ea7051b61a16,PodSandboxId:ae1c1b106a1ce6fe7752079dd99dd3da08ea5c8417f73c7d2db66281343dd8bc,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
                                                
                                                eRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761990706331554116,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-p2brt,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 7d9684ff-4d35-4cab-b655-c3fcbbfaa552,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5d4912957560b7536a6c330e413b78d8074dab0b202ba22a5bc327a0cf5f8a2,PodSandboxId:8aac4234df2d12e07c37fb39a1595bd340e7adc1fe2162b211b453851a56a63d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd34
                                                
                                                6a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761990685537208680,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e328fd3e-a381-414d-ba99-1aa6f7f40585,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c0222f1b7214ab99931e32355894f2f03f8261792abe4a4d2bb34fcd2969f,PodSandboxId:1c7e949564af5bc80420dc3808d3f2087aa2f9b293627ed59b78902667c1bcef,Metadata:&ContainerMetadata{Name
                                                
                                                :amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761990655935157577,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lr4lw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee1e3ae-5d43-4b43-a348-0e04ec066093,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de230bb7ebf7771aad2c97275ddf43f297877d5aa072670a0a1ea8eb9c2d60f,PodSandboxId:4fbf69bbad2cf19e93c7344344fcc06babe9936500aa5bef352fd41fd
                                                
                                                55b694f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761990655486179158,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c394064-33ff-4fd0-a4bc-afb948952ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a27cff89c3381e7c393bcaa8835fcfc8181cd61e7b6ab7d332528c6747943387,PodSandboxId:d7fa84c405309fb1e772e6c659810175defff8a22e42a89197e6b5a5597a8c84,Meta
                                                
                                                data:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761990646997219064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vsbrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a65dae-82f4-4f33-b460-fa45a39b3342,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
                                                
                                                Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260edbddb00ef801ed1131f918ebc64902a7e77ccf06a4ed7c432254423d7b66,PodSandboxId:089a55380f09729b05eee5a252927b0c79db01dc718d6007a08b5689f2ce71c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761990646303679370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7fck9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a834adcc-b0ec-4cad-8944-bea90a627787,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.
                                                
                                                terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86586375e770d45d0d93bbb47a93539a88b9dc7cdd8db120d1c9301cf9724986,PodSandboxId:47c204cffec810f2b063e0da736cf9f9a808714639f57abfa3a16da3187f96a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761990633442334233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64ac66b49c7412b8fa37d2ea6025670,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"con
                                                
                                                tainerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195a44f107dbd39942c739e79c57d0cb5365ba4acc9d6617ae02ff7fabb66667,PodSandboxId:0780152663a4bf99a793fec09c7dd2ddf6dc4673b89381ad0a9d3bb4248095e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761990633398671407,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c80f54e7a2ffeed9d816c83a1643dee4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.ku
                                                
                                                bernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a6a05d5c3b322ab0daa8e0142efedb8b2cd9709809a366e3b02c33252f097e2,PodSandboxId:4303a653e0e77a28ad08502f1313df5bfebd24a17c8e4816f84db5f2d930a571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761990633395979421,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-086339,io.kubernetes.pod.namespace: kube-system,i
                                                
                                                o.kubernetes.pod.uid: 6ff8e16ad24795a1ca532e7aa16809a1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c9ad62c824f0689e272e9d02d572351e59f9325259ac81e62d0597c48762a5,PodSandboxId:25028e524345d4f110f0887066fc1114742e907055b01a9fcf2cb85f6e770b0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761990633414977195,Labels:map[string]string{io.kubernetes.container.name: kube
                                                
                                                -apiserver,io.kubernetes.pod.name: kube-apiserver-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b611a3c7c50f2133aad0ea70b2107,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a43b13bd-cd83-453e-bfec-194e48df3256 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.542574381Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=daccb402-3088-48ed-997c-96cb295b80e1 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.542667879Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=daccb402-3088-48ed-997c-96cb295b80e1 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.543918251Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=582e17ec-5d7c-43b1-a8b9-c5844ca93000 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.545112251Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991272545079285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:511388,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=582e17ec-5d7c-43b1-a8b9-c5844ca93000 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.545761692Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f674aaa-d446-48b1-824e-331da7751d60 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.546023274Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f674aaa-d446-48b1-824e-331da7751d60 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.546391593Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8f9ab035f10b883f89c331d67218f109856992b9b069efdae0a16a908bf656d,PodSandboxId:ecbb6e0269dbe5206ee40e41cf202e8a0f1fc8985220bca67dd2abcee664753f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761990753121450389,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bd0f0b90-ebd1-434e-86db-7717f59bb0b2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60f64f1e1264248ca86d3e8ea17c90635c9d479311fe8d5ea622b661f0068bd6,PodSandboxId:b2e63f129e7cad5f03427260dc3589db4cecd4b45329bdb1e1023738a84b3985,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761990727410551183,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-g7dks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a4165ee4-5d09-49d4-a0c1-f663b2084a0d,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4b410307ca2339c52c3d12763e6b754600ea116c26c1df56bd5b04a1a68661d,PodSandboxId:48e637e86e4493f303489a52457e2b59ba63b33cc608f38bb21f8e651a9e1571,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990712169152493,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dw6sn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51ccb987-b8f5-42f1-af70-9d22dd5ca2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:764b375ef3791c087e6ad4b248126f0c7c98e6065f6bd3c282044dcc212ac1f4,PodSandboxId:1c83f726dda755d3ed283799c973eeabdf1da173f6f6ce420a3d047efb307a42,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990709662174283,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d7qkm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0111b6b5-409d-4b18-a391-db0a0fbe7882,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ccff636c81da778123aaba73ca1c6a96114c3d9b455724fc184ea7051b61a16,PodSandboxId:ae1c1b106a1ce6fe7752079dd99dd3da08ea5c8417f73c7d2db66281343dd8bc,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761990706331554116,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-p2brt,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 7d9684ff-4d35-4cab-b655-c3fcbbfaa552,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5d4912957560b7536a6c330e413b78d8074dab0b202ba22a5bc327a0cf5f8a2,PodSandboxId:8aac4234df2d12e07c37fb39a1595bd340e7adc1fe2162b211b453851a56a63d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761990685537208680,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e328fd3e-a381-414d-ba99-1aa6f7f40585,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c0222f1b7214ab99931e32355894f2f03f8261792abe4a4d2bb34fcd2969f,PodSandboxId:1c7e949564af5bc80420dc3808d3f2087aa2f9b293627ed59b78902667c1bcef,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761990655935157577,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lr4lw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee1e3ae-5d43-4b43-a348-0e04ec066093,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de230bb7ebf7771aad2c97275ddf43f297877d5aa072670a0a1ea8eb9c2d60f,PodSandboxId:4fbf69bbad2cf19e93c7344344fcc06babe9936500aa5bef352fd41fd55b694f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761990655486179158,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c394064-33ff-4fd0-a4bc-afb948952ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a27cff89c3381e7c393bcaa8835fcfc8181cd61e7b6ab7d332528c6747943387,PodSandboxId:d7fa84c405309fb1e772e6c659810175defff8a22e42a89197e6b5a5597a8c84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761990646997219064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vsbrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a65dae-82f4-4f33-b460-fa45a39b3342,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260edbddb00ef801ed1131f918ebc64902a7e77ccf06a4ed7c432254423d7b66,PodSandboxId:089a55380f09729b05eee5a252927b0c79db01dc718d6007a08b5689f2ce71c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761990646303679370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7fck9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a834adcc-b0ec-4cad-8944-bea90a627787,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86586375e770d45d0d93bbb47a93539a88b9dc7cdd8db120d1c9301cf9724986,PodSandboxId:47c204cffec810f2b063e0da736cf9f9a808714639f57abfa3a16da3187f96a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761990633442334233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64ac66b49c7412b8fa37d2ea6025670,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195a44f107dbd39942c739e79c57d0cb5365ba4acc9d6617ae02ff7fabb66667,PodSandboxId:0780152663a4bf99a793fec09c7dd2ddf6dc4673b89381ad0a9d3bb4248095e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761990633398671407,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c80f54e7a2ffeed9d816c83a1643dee4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a6a05d5c3b322ab0daa8e0142efedb8b2cd9709809a366e3b02c33252f097e2,PodSandboxId:4303a653e0e77a28ad08502f1313df5bfebd24a17c8e4816f84db5f2d930a571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761990633395979421,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff8e16ad24795a1ca532e7aa16809a1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c9ad62c824f0689e272e9d02d572351e59f9325259ac81e62d0597c48762a5,PodSandboxId:25028e524345d4f110f0887066fc1114742e907055b01a9fcf2cb85f6e770b0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761990633414977195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b611a3c7c50f2133aad0ea70b2107,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f674aaa-d446-48b1-824e-331da7751d60 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.584610395Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8adb3d4a-532d-437f-8010-bd862eb9bb16 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.584744468Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8adb3d4a-532d-437f-8010-bd862eb9bb16 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.586071799Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a8ad1f8-747b-495f-8cab-a8953c5338a1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.587449550Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991272587421814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:511388,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a8ad1f8-747b-495f-8cab-a8953c5338a1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.588396765Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41864b4a-9076-48c6-89c0-760957b1a65d name=/runtime.v1.RuntimeService/ListContainers
                                                Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.588488560Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41864b4a-9076-48c6-89c0-760957b1a65d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.588929638Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8f9ab035f10b883f89c331d67218f109856992b9b069efdae0a16a908bf656d,PodSandboxId:ecbb6e0269dbe5206ee40e41cf202e8a0f1fc8985220bca67dd2abcee664753f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761990753121450389,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bd0f0b90-ebd1-434e-86db-7717f59bb0b2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60f64f1e1264248ca86d3e8ea17c90635c9d479311fe8d5ea622b661f0068bd6,PodSandboxId:b2e63f129e7cad5f03427260dc3589db4cecd4b45329bdb1e1023738a84b3985,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761990727410551183,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-g7dks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a4165ee4-5d09-49d4-a0c1-f663b2084a0d,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4b410307ca2339c52c3d12763e6b754600ea116c26c1df56bd5b04a1a68661d,PodSandboxId:48e637e86e4493f303489a52457e2b59ba63b33cc608f38bb21f8e651a9e1571,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990712169152493,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dw6sn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51ccb987-b8f5-42f1-af70-9d22dd5ca2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:764b375ef3791c087e6ad4b248126f0c7c98e6065f6bd3c282044dcc212ac1f4,PodSandboxId:1c83f726dda755d3ed283799c973eeabdf1da173f6f6ce420a3d047efb307a42,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990709662174283,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d7qkm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0111b6b5-409d-4b18-a391-db0a0fbe7882,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ccff636c81da778123aaba73ca1c6a96114c3d9b455724fc184ea7051b61a16,PodSandboxId:ae1c1b106a1ce6fe7752079dd99dd3da08ea5c8417f73c7d2db66281343dd8bc,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761990706331554116,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-p2brt,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 7d9684ff-4d35-4cab-b655-c3fcbbfaa552,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5d4912957560b7536a6c330e413b78d8074dab0b202ba22a5bc327a0cf5f8a2,PodSandboxId:8aac4234df2d12e07c37fb39a1595bd340e7adc1fe2162b211b453851a56a63d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761990685537208680,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e328fd3e-a381-414d-ba99-1aa6f7f40585,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c0222f1b7214ab99931e32355894f2f03f8261792abe4a4d2bb34fcd2969f,PodSandboxId:1c7e949564af5bc80420dc3808d3f2087aa2f9b293627ed59b78902667c1bcef,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761990655935157577,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lr4lw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee1e3ae-5d43-4b43-a348-0e04ec066093,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de230bb7ebf7771aad2c97275ddf43f297877d5aa072670a0a1ea8eb9c2d60f,PodSandboxId:4fbf69bbad2cf19e93c7344344fcc06babe9936500aa5bef352fd41fd55b694f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761990655486179158,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c394064-33ff-4fd0-a4bc-afb948952ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a27cff89c3381e7c393bcaa8835fcfc8181cd61e7b6ab7d332528c6747943387,PodSandboxId:d7fa84c405309fb1e772e6c659810175defff8a22e42a89197e6b5a5597a8c84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761990646997219064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vsbrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a65dae-82f4-4f33-b460-fa45a39b3342,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260edbddb00ef801ed1131f918ebc64902a7e77ccf06a4ed7c432254423d7b66,PodSandboxId:089a55380f09729b05eee5a252927b0c79db01dc718d6007a08b5689f2ce71c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761990646303679370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7fck9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a834adcc-b0ec-4cad-8944-bea90a627787,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86586375e770d45d0d93bbb47a93539a88b9dc7cdd8db120d1c9301cf9724986,PodSandboxId:47c204cffec810f2b063e0da736cf9f9a808714639f57abfa3a16da3187f96a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761990633442334233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64ac66b49c7412b8fa37d2ea6025670,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195a44f107dbd39942c739e79c57d0cb5365ba4acc9d6617ae02ff7fabb66667,PodSandboxId:0780152663a4bf99a793fec09c7dd2ddf6dc4673b89381ad0a9d3bb4248095e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761990633398671407,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c80f54e7a2ffeed9d816c83a1643dee4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a6a05d5c3b322ab0daa8e0142efedb8b2cd9709809a366e3b02c33252f097e2,PodSandboxId:4303a653e0e77a28ad08502f1313df5bfebd24a17c8e4816f84db5f2d930a571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761990633395979421,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff8e16ad24795a1ca532e7aa16809a1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c9ad62c824f0689e272e9d02d572351e59f9325259ac81e62d0597c48762a5,PodSandboxId:25028e524345d4f110f0887066fc1114742e907055b01a9fcf2cb85f6e770b0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761990633414977195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b611a3c7c50f2133aad0ea70b2107,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41864b4a-9076-48c6-89c0-760957b1a65d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.633391948Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d464116-3916-4238-bc93-98f0081cfb4b name=/runtime.v1.RuntimeService/Version
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.633484670Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d464116-3916-4238-bc93-98f0081cfb4b name=/runtime.v1.RuntimeService/Version
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.635335971Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d10373c3-1aec-446d-93a9-5052887d0261 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.637233621Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991272637199543,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:511388,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d10373c3-1aec-446d-93a9-5052887d0261 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.638095967Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b8f6eb1-b2c4-4f59-b21b-2e0cdd5f9958 name=/runtime.v1.RuntimeService/ListContainers
                                                Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.638184482Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b8f6eb1-b2c4-4f59-b21b-2e0cdd5f9958 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.638480061Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8f9ab035f10b883f89c331d67218f109856992b9b069efdae0a16a908bf656d,PodSandboxId:ecbb6e0269dbe5206ee40e41cf202e8a0f1fc8985220bca67dd2abcee664753f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761990753121450389,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bd0f0b90-ebd1-434e-86db-7717f59bb0b2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
                                                
                                                minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60f64f1e1264248ca86d3e8ea17c90635c9d479311fe8d5ea622b661f0068bd6,PodSandboxId:b2e63f129e7cad5f03427260dc3589db4cecd4b45329bdb1e1023738a84b3985,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761990727410551183,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-g7dks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a4165ee4-5d09-49d4-a0c1-f663b2084a0d,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.po
                                                
                                                rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4b410307ca2339c52c3d12763e6b754600ea116c26c1df56bd5b04a1a68661d,PodSandboxId:48e637e86e4493f303489a52457e2b59ba63b33cc608f38bb21f8e651a9e1571,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a
                                                
                                                7bf2,State:CONTAINER_EXITED,CreatedAt:1761990712169152493,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dw6sn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51ccb987-b8f5-42f1-af70-9d22dd5ca2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:764b375ef3791c087e6ad4b248126f0c7c98e6065f6bd3c282044dcc212ac1f4,PodSandboxId:1c83f726dda755d3ed283799c973eeabdf1da173f6f6ce420a3d047efb307a42,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba
                                                
                                                112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990709662174283,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d7qkm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0111b6b5-409d-4b18-a391-db0a0fbe7882,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ccff636c81da778123aaba73ca1c6a96114c3d9b455724fc184ea7051b61a16,PodSandboxId:ae1c1b106a1ce6fe7752079dd99dd3da08ea5c8417f73c7d2db66281343dd8bc,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
                                                
                                                eRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761990706331554116,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-p2brt,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 7d9684ff-4d35-4cab-b655-c3fcbbfaa552,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5d4912957560b7536a6c330e413b78d8074dab0b202ba22a5bc327a0cf5f8a2,PodSandboxId:8aac4234df2d12e07c37fb39a1595bd340e7adc1fe2162b211b453851a56a63d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd34
                                                
                                                6a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761990685537208680,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e328fd3e-a381-414d-ba99-1aa6f7f40585,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c0222f1b7214ab99931e32355894f2f03f8261792abe4a4d2bb34fcd2969f,PodSandboxId:1c7e949564af5bc80420dc3808d3f2087aa2f9b293627ed59b78902667c1bcef,Metadata:&ContainerMetadata{Name
                                                :amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761990655935157577,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lr4lw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee1e3ae-5d43-4b43-a348-0e04ec066093,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de230bb7ebf7771aad2c97275ddf43f297877d5aa072670a0a1ea8eb9c2d60f,PodSandboxId:4fbf69bbad2cf19e93c7344344fcc06babe9936500aa5bef352fd41fd
                                                55b694f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761990655486179158,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c394064-33ff-4fd0-a4bc-afb948952ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a27cff89c3381e7c393bcaa8835fcfc8181cd61e7b6ab7d332528c6747943387,PodSandboxId:d7fa84c405309fb1e772e6c659810175defff8a22e42a89197e6b5a5597a8c84,Meta
                                                data:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761990646997219064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vsbrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a65dae-82f4-4f33-b460-fa45a39b3342,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
                                                Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260edbddb00ef801ed1131f918ebc64902a7e77ccf06a4ed7c432254423d7b66,PodSandboxId:089a55380f09729b05eee5a252927b0c79db01dc718d6007a08b5689f2ce71c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761990646303679370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7fck9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a834adcc-b0ec-4cad-8944-bea90a627787,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.
                                                terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86586375e770d45d0d93bbb47a93539a88b9dc7cdd8db120d1c9301cf9724986,PodSandboxId:47c204cffec810f2b063e0da736cf9f9a808714639f57abfa3a16da3187f96a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761990633442334233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64ac66b49c7412b8fa37d2ea6025670,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"con
                                                tainerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195a44f107dbd39942c739e79c57d0cb5365ba4acc9d6617ae02ff7fabb66667,PodSandboxId:0780152663a4bf99a793fec09c7dd2ddf6dc4673b89381ad0a9d3bb4248095e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761990633398671407,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c80f54e7a2ffeed9d816c83a1643dee4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.ku
                                                bernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a6a05d5c3b322ab0daa8e0142efedb8b2cd9709809a366e3b02c33252f097e2,PodSandboxId:4303a653e0e77a28ad08502f1313df5bfebd24a17c8e4816f84db5f2d930a571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761990633395979421,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-086339,io.kubernetes.pod.namespace: kube-system,i
                                                o.kubernetes.pod.uid: 6ff8e16ad24795a1ca532e7aa16809a1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c9ad62c824f0689e272e9d02d572351e59f9325259ac81e62d0597c48762a5,PodSandboxId:25028e524345d4f110f0887066fc1114742e907055b01a9fcf2cb85f6e770b0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761990633414977195,Labels:map[string]string{io.kubernetes.container.name: kube
                                                -apiserver,io.kubernetes.pod.name: kube-apiserver-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b611a3c7c50f2133aad0ea70b2107,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b8f6eb1-b2c4-4f59-b21b-2e0cdd5f9958 name=/runtime.v1.RuntimeService/ListContainers
                                                ==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
d8f9ab035f10b gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 8 minutes ago Running busybox 0 ecbb6e0269dbe busybox
60f64f1e12642 registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd 9 minutes ago Running controller 0 b2e63f129e7ca ingress-nginx-controller-675c5ddd98-g7dks
a4b410307ca23 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39 9 minutes ago Exited patch 0 48e637e86e449 ingress-nginx-admission-patch-dw6sn
764b375ef3791 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39 9 minutes ago Exited create 0 1c83f726dda75 ingress-nginx-admission-create-d7qkm
6ccff636c81da ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb 9 minutes ago Running gadget 0 ae1c1b106a1ce gadget-p2brt
e5d4912957560 docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 9 minutes ago Running minikube-ingress-dns 0 8aac4234df2d1 kube-ingress-dns-minikube
323c0222f1b72 docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 10 minutes ago Running amd-gpu-device-plugin 0 1c7e949564af5 amd-gpu-device-plugin-lr4lw
6de230bb7ebf7 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 10 minutes ago Running storage-provisioner 0 4fbf69bbad2cf storage-provisioner
a27cff89c3381 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969 10 minutes ago Running coredns 0 d7fa84c405309 coredns-66bc5c9577-vsbrs
260edbddb00ef fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7 10 minutes ago Running kube-proxy 0 089a55380f097 kube-proxy-7fck9
86586375e770d 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813 10 minutes ago Running kube-scheduler 0 47c204cffec81 kube-scheduler-addons-086339
e1c9ad62c824f c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97 10 minutes ago Running kube-apiserver 0 25028e524345d kube-apiserver-addons-086339
195a44f107dbd 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115 10 minutes ago Running etcd 0 0780152663a4b etcd-addons-086339
9a6a05d5c3b32 c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f 10 minutes ago Running kube-controller-manager 0 4303a653e0e77 kube-controller-manager-addons-086339
==> coredns [a27cff89c3381e7c393bcaa8835fcfc8181cd61e7b6ab7d332528c6747943387] <==
[INFO] 10.244.0.8:46984 - 64533 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000141653s
[INFO] 10.244.0.8:46984 - 26572 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000122796s
[INFO] 10.244.0.8:46984 - 13929 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000122328s
[INFO] 10.244.0.8:46984 - 50125 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000111517s
[INFO] 10.244.0.8:46984 - 28460 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000076823s
[INFO] 10.244.0.8:46984 - 37293 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000357436s
[INFO] 10.244.0.8:46984 - 35576 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000074841s
[INFO] 10.244.0.8:47197 - 56588 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000121682s
[INFO] 10.244.0.8:47197 - 56863 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000074546s
[INFO] 10.244.0.8:55042 - 52218 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00018264s
[INFO] 10.244.0.8:55042 - 52511 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000079606s
[INFO] 10.244.0.8:46708 - 46443 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000066375s
[INFO] 10.244.0.8:46708 - 46765 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000066983s
[INFO] 10.244.0.8:59900 - 32652 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000207279s
[INFO] 10.244.0.8:59900 - 32872 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000078309s
[INFO] 10.244.0.23:50316 - 52228 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001915683s
[INFO] 10.244.0.23:47612 - 63606 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002354882s
[INFO] 10.244.0.23:53727 - 34179 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000138277s
[INFO] 10.244.0.23:43312 - 5456 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000125706s
[INFO] 10.244.0.23:34742 - 50233 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000105505s
[INFO] 10.244.0.23:42706 - 32458 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000148964s
[INFO] 10.244.0.23:47433 - 16041 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00404755s
[INFO] 10.244.0.23:43796 - 36348 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.003930977s
[INFO] 10.244.0.28:59610 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000657818s
[INFO] 10.244.0.28:58478 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000385159s
==> describe nodes <==
Name: addons-086339
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-086339
kubernetes.io/os=linux
minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
minikube.k8s.io/name=addons-086339
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_11_01T09_50_40_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-086339
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 01 Nov 2025 09:50:37 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-086339
AcquireTime: <unset>
RenewTime: Sat, 01 Nov 2025 10:01:03 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 01 Nov 2025 09:54:15 +0000 Sat, 01 Nov 2025 09:50:33 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 01 Nov 2025 09:54:15 +0000 Sat, 01 Nov 2025 09:50:33 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 01 Nov 2025 09:54:15 +0000 Sat, 01 Nov 2025 09:50:33 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 01 Nov 2025 09:54:15 +0000 Sat, 01 Nov 2025 09:50:40 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.58
Hostname: addons-086339
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
System Info:
Machine ID: a0be334a213a4e9abad36168cb6c4d93
System UUID: a0be334a-213a-4e9a-bad3-6168cb6c4d93
Boot ID: f5f61220-a436-4e42-9f0c-21fc51d403ab
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.34.1
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (15 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m43s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m1s
default task-pv-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m57s
default test-local-path 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m24s
gadget gadget-p2brt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
ingress-nginx ingress-nginx-controller-675c5ddd98-g7dks 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 10m
kube-system amd-gpu-device-plugin-lr4lw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system coredns-66bc5c9577-vsbrs 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 10m
kube-system etcd-addons-086339 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 10m
kube-system kube-apiserver-addons-086339 250m (12%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system kube-controller-manager-addons-086339 200m (10%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system kube-proxy-7fck9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system kube-scheduler-addons-086339 100m (5%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 10m kube-proxy
Normal Starting 10m kubelet Starting kubelet.
Normal NodeAllocatableEnforced 10m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 10m kubelet Node addons-086339 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 10m kubelet Node addons-086339 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 10m kubelet Node addons-086339 status is now: NodeHasSufficientPID
Normal NodeReady 10m kubelet Node addons-086339 status is now: NodeReady
Normal RegisteredNode 10m node-controller Node addons-086339 event: Registered Node addons-086339 in Controller
==> dmesg <==
[ +0.026933] kauditd_printk_skb: 18 callbacks suppressed
[ +0.422693] kauditd_printk_skb: 282 callbacks suppressed
[ +0.000178] kauditd_printk_skb: 179 callbacks suppressed
[Nov 1 09:51] kauditd_printk_skb: 480 callbacks suppressed
[ +10.588247] kauditd_printk_skb: 85 callbacks suppressed
[ +8.893680] kauditd_printk_skb: 32 callbacks suppressed
[ +4.164899] kauditd_printk_skb: 11 callbacks suppressed
[ +11.079506] kauditd_printk_skb: 41 callbacks suppressed
[ +5.550370] kauditd_printk_skb: 17 callbacks suppressed
[ +5.067618] kauditd_printk_skb: 131 callbacks suppressed
[ +2.164833] kauditd_printk_skb: 126 callbacks suppressed
[Nov 1 09:52] kauditd_printk_skb: 130 callbacks suppressed
[ +6.663248] kauditd_printk_skb: 68 callbacks suppressed
[ +6.258025] kauditd_printk_skb: 26 callbacks suppressed
[ +0.000041] kauditd_printk_skb: 2 callbacks suppressed
[ +13.077918] kauditd_printk_skb: 41 callbacks suppressed
[ +0.000038] kauditd_printk_skb: 22 callbacks suppressed
[ +0.048376] kauditd_printk_skb: 98 callbacks suppressed
[ +0.000043] kauditd_printk_skb: 78 callbacks suppressed
[Nov 1 09:53] kauditd_printk_skb: 58 callbacks suppressed
[ +4.089930] kauditd_printk_skb: 42 callbacks suppressed
[ +31.556122] kauditd_printk_skb: 74 callbacks suppressed
[Nov 1 09:54] kauditd_printk_skb: 80 callbacks suppressed
[ +15.872282] kauditd_printk_skb: 22 callbacks suppressed
[Nov 1 09:59] kauditd_printk_skb: 10 callbacks suppressed
==> etcd [195a44f107dbd39942c739e79c57d0cb5365ba4acc9d6617ae02ff7fabb66667] <==
	{"level":"warn","ts":"2025-11-01T09:51:53.829077Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T09:51:53.520485Z","time spent":"307.914654ms","remote":"127.0.0.1:50442","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4224,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-patch\" mod_revision:715 > success:<request_put:<key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-patch\" value_size:4158 >> failure:<request_range:<key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-patch\" > >"}
                                                	{"level":"warn","ts":"2025-11-01T09:51:53.837101Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"205.85086ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
                                                	{"level":"info","ts":"2025-11-01T09:51:53.837158Z","caller":"traceutil/trace.go:172","msg":"trace[1726047932] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1054; }","duration":"205.918617ms","start":"2025-11-01T09:51:53.631230Z","end":"2025-11-01T09:51:53.837149Z","steps":["trace[1726047932] 'agreement among raft nodes before linearized reading'  (duration: 205.832252ms)"],"step_count":1}
                                                	{"level":"warn","ts":"2025-11-01T09:51:53.837332Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.114488ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
                                                	{"level":"info","ts":"2025-11-01T09:51:53.837352Z","caller":"traceutil/trace.go:172","msg":"trace[1767754287] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1054; }","duration":"160.137708ms","start":"2025-11-01T09:51:53.677208Z","end":"2025-11-01T09:51:53.837346Z","steps":["trace[1767754287] 'agreement among raft nodes before linearized reading'  (duration: 160.097095ms)"],"step_count":1}
                                                	{"level":"info","ts":"2025-11-01T09:51:53.837427Z","caller":"traceutil/trace.go:172","msg":"trace[169582400] transaction","detail":"{read_only:false; response_revision:1055; number_of_response:1; }","duration":"313.012286ms","start":"2025-11-01T09:51:53.524403Z","end":"2025-11-01T09:51:53.837415Z","steps":["trace[169582400] 'process raft request'  (duration: 312.936714ms)"],"step_count":1}
                                                	{"level":"warn","ts":"2025-11-01T09:51:53.837521Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T09:51:53.524385Z","time spent":"313.094727ms","remote":"127.0.0.1:50348","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4615,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-dw6sn\" mod_revision:1047 > success:<request_put:<key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-dw6sn\" value_size:4543 >> failure:<request_range:<key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-dw6sn\" > >"}
                                                	{"level":"warn","ts":"2025-11-01T09:51:53.837540Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"187.263588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
                                                	{"level":"info","ts":"2025-11-01T09:51:53.837560Z","caller":"traceutil/trace.go:172","msg":"trace[1222634] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1055; }","duration":"187.33ms","start":"2025-11-01T09:51:53.650224Z","end":"2025-11-01T09:51:53.837554Z","steps":["trace[1222634] 'agreement among raft nodes before linearized reading'  (duration: 187.245695ms)"],"step_count":1}
                                                	{"level":"warn","ts":"2025-11-01T09:51:57.997674Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.945423ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
                                                	{"level":"info","ts":"2025-11-01T09:51:57.998286Z","caller":"traceutil/trace.go:172","msg":"trace[902941296] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1089; }","duration":"106.560193ms","start":"2025-11-01T09:51:57.891708Z","end":"2025-11-01T09:51:57.998268Z","steps":["trace[902941296] 'range keys from in-memory index tree'  (duration: 105.862666ms)"],"step_count":1}
                                                	{"level":"info","ts":"2025-11-01T09:52:04.319796Z","caller":"traceutil/trace.go:172","msg":"trace[427956117] transaction","detail":"{read_only:false; response_revision:1125; number_of_response:1; }","duration":"140.175418ms","start":"2025-11-01T09:52:04.179583Z","end":"2025-11-01T09:52:04.319759Z","steps":["trace[427956117] 'process raft request'  (duration: 140.063245ms)"],"step_count":1}
                                                	{"level":"info","ts":"2025-11-01T09:52:08.551381Z","caller":"traceutil/trace.go:172","msg":"trace[603420838] transaction","detail":"{read_only:false; response_revision:1143; number_of_response:1; }","duration":"197.437726ms","start":"2025-11-01T09:52:08.353928Z","end":"2025-11-01T09:52:08.551366Z","steps":["trace[603420838] 'process raft request'  (duration: 197.339599ms)"],"step_count":1}
                                                	{"level":"warn","ts":"2025-11-01T09:52:12.328289Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.65917ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
                                                	{"level":"info","ts":"2025-11-01T09:52:12.328359Z","caller":"traceutil/trace.go:172","msg":"trace[1819451364] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1156; }","duration":"151.738106ms","start":"2025-11-01T09:52:12.176611Z","end":"2025-11-01T09:52:12.328349Z","steps":["trace[1819451364] 'range keys from in-memory index tree'  (duration: 151.603213ms)"],"step_count":1}
                                                	{"level":"info","ts":"2025-11-01T09:52:19.593365Z","caller":"traceutil/trace.go:172","msg":"trace[1734006161] transaction","detail":"{read_only:false; response_revision:1195; number_of_response:1; }","duration":"230.197039ms","start":"2025-11-01T09:52:19.363155Z","end":"2025-11-01T09:52:19.593352Z","steps":["trace[1734006161] 'process raft request'  (duration: 230.054763ms)"],"step_count":1}
                                                	{"level":"info","ts":"2025-11-01T09:53:03.073159Z","caller":"traceutil/trace.go:172","msg":"trace[844100605] linearizableReadLoop","detail":"{readStateIndex:1471; appliedIndex:1471; }","duration":"184.287063ms","start":"2025-11-01T09:53:02.888805Z","end":"2025-11-01T09:53:03.073092Z","steps":["trace[844100605] 'read index received'  (duration: 184.274805ms)","trace[844100605] 'applied index is now lower than readState.Index'  (duration: 11.185µs)"],"step_count":2}
                                                	{"level":"warn","ts":"2025-11-01T09:53:03.073336Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.514416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
                                                	{"level":"info","ts":"2025-11-01T09:53:03.073356Z","caller":"traceutil/trace.go:172","msg":"trace[379602539] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1424; }","duration":"184.548883ms","start":"2025-11-01T09:53:02.888802Z","end":"2025-11-01T09:53:03.073351Z","steps":["trace[379602539] 'agreement among raft nodes before linearized reading'  (duration: 184.47499ms)"],"step_count":1}
                                                	{"level":"warn","ts":"2025-11-01T09:53:03.073440Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"173.732425ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
                                                	{"level":"info","ts":"2025-11-01T09:53:03.073464Z","caller":"traceutil/trace.go:172","msg":"trace[1841159583] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1424; }","duration":"173.762443ms","start":"2025-11-01T09:53:02.899696Z","end":"2025-11-01T09:53:03.073458Z","steps":["trace[1841159583] 'agreement among raft nodes before linearized reading'  (duration: 173.676648ms)"],"step_count":1}
                                                
                                                	{"level":"info","ts":"2025-11-01T09:53:03.073212Z","caller":"traceutil/trace.go:172","msg":"trace[990398784] transaction","detail":"{read_only:false; response_revision:1424; number_of_response:1; }","duration":"298.156963ms","start":"2025-11-01T09:53:02.775044Z","end":"2025-11-01T09:53:03.073201Z","steps":["trace[990398784] 'process raft request'  (duration: 298.073448ms)"],"step_count":1}
                                                
                                                	{"level":"info","ts":"2025-11-01T10:00:35.435145Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1806}
                                                
                                                	{"level":"info","ts":"2025-11-01T10:00:35.507318Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1806,"took":"68.899276ms","hash":945816022,"current-db-size-bytes":6217728,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":4050944,"current-db-size-in-use":"4.1 MB"}
                                                
                                                	{"level":"info","ts":"2025-11-01T10:00:35.507380Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":945816022,"revision":1806,"compact-revision":-1}
                                                
                                                ==> kernel <==
10:01:13 up 11 min, 0 users, load average: 0.64, 0.66, 0.55
Linux addons-086339 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [e1c9ad62c824f0689e272e9d02d572351e59f9325259ac81e62d0597c48762a5] <==
Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
E1101 09:51:45.526596 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.150.255:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.150.255:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.150.255:443: connect: connection refused" logger="UnhandledError"
E1101 09:51:45.531959 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.150.255:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.150.255:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.150.255:443: connect: connection refused" logger="UnhandledError"
I1101 09:51:45.647009 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E1101 09:52:39.519537 1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:41530: use of closed network connection
I1101 09:52:48.989373 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.57.119"}
I1101 09:53:11.180343 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1101 09:53:11.353371 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.7.153"}
I1101 09:53:46.542354 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1101 09:59:18.596497 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1101 09:59:18.597023 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1101 09:59:18.641644 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1101 09:59:18.641705 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1101 09:59:18.642882 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1101 09:59:18.642938 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1101 09:59:18.667098 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1101 09:59:18.667271 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1101 09:59:18.699786 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1101 09:59:18.699898 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1101 09:59:19.643429 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1101 09:59:19.701335 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W1101 09:59:19.721247 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I1101 10:00:37.179156 1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
==> kube-controller-manager [9a6a05d5c3b322ab0daa8e0142efedb8b2cd9709809a366e3b02c33252f097e2] <==
E1101 09:59:35.978218 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 09:59:35.979239 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 09:59:38.341797 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 09:59:38.343162 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 09:59:41.476578 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 09:59:41.477757 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
I1101 09:59:44.086990 1 reconciler.go:364] "attacherDetacher.AttachVolume started" logger="persistentvolume-attach-detach-controller" volumeName="kubernetes.io/csi/hostpath.csi.k8s.io^95d0b596-b708-11f0-979a-ce1acd12cba3" nodeName="addons-086339" scheduledPods=["default/task-pv-pod"]
I1101 09:59:44.352321 1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
I1101 09:59:44.352374 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1101 09:59:44.421472 1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
I1101 09:59:44.421649 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E1101 09:59:59.548631 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 09:59:59.550334 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 10:00:00.894580 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 10:00:00.895667 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 10:00:01.800596 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 10:00:01.801785 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 10:00:29.194041 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 10:00:29.195428 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 10:00:38.263769 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 10:00:38.265085 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 10:00:46.598339 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 10:00:46.599423 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 10:00:59.811512 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 10:00:59.813647 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
==> kube-proxy [260edbddb00ef801ed1131f918ebc64902a7e77ccf06a4ed7c432254423d7b66] <==
I1101 09:50:47.380388 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1101 09:50:47.481009 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1101 09:50:47.481962 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.58"]
E1101 09:50:47.483258 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1101 09:50:47.618974 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1101 09:50:47.619028 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1101 09:50:47.619055 1 server_linux.go:132] "Using iptables Proxier"
I1101 09:50:47.646432 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1101 09:50:47.648118 1 server.go:527] "Version info" version="v1.34.1"
I1101 09:50:47.648153 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1101 09:50:47.664129 1 config.go:309] "Starting node config controller"
I1101 09:50:47.666955 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1101 09:50:47.666969 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1101 09:50:47.665033 1 config.go:200] "Starting service config controller"
I1101 09:50:47.666978 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1101 09:50:47.667949 1 config.go:106] "Starting endpoint slice config controller"
I1101 09:50:47.667987 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1101 09:50:47.668010 1 config.go:403] "Starting serviceCIDR config controller"
I1101 09:50:47.668021 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1101 09:50:47.767136 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1101 09:50:47.771739 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1101 09:50:47.772010 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
==> kube-scheduler [86586375e770d45d0d93bbb47a93539a88b9dc7cdd8db120d1c9301cf9724986] <==
E1101 09:50:37.221936 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1101 09:50:37.222056 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1101 09:50:37.222116 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1101 09:50:37.222130 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1101 09:50:37.225229 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1101 09:50:37.225317 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1101 09:50:37.225378 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1101 09:50:37.227418 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1101 09:50:37.227443 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1101 09:50:37.227647 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1101 09:50:37.227768 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1101 09:50:37.227996 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1101 09:50:38.054220 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1101 09:50:38.064603 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1101 09:50:38.082458 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1101 09:50:38.180400 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1101 09:50:38.210958 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1101 09:50:38.220410 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1101 09:50:38.222634 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1101 09:50:38.324209 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1101 09:50:38.347306 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1101 09:50:38.391541 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1101 09:50:38.445129 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1101 09:50:38.559973 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
I1101 09:50:41.263288 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
	Nov 01 10:00:20 addons-086339 kubelet[1515]: E1101 10:00:20.397760    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761991220397149426  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
                                                
                                                	Nov 01 10:00:20 addons-086339 kubelet[1515]: E1101 10:00:20.397784    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761991220397149426  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
                                                
                                                	Nov 01 10:00:30 addons-086339 kubelet[1515]: E1101 10:00:30.401057    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761991230400557931  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
                                                
                                                	Nov 01 10:00:30 addons-086339 kubelet[1515]: E1101 10:00:30.401083    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761991230400557931  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
                                                
                                                Nov 01 10:00:31 addons-086339 kubelet[1515]: E1101 10:00:31.027702 1515 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="80f28ba1-b1ac-4f7a-9a35-3fd834d8e54e"
Nov 01 10:00:32 addons-086339 kubelet[1515]: E1101 10:00:32.026691 1515 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="eb0ec6cf-d05a-4514-92a8-21a6ef18f433"
	Nov 01 10:00:40 addons-086339 kubelet[1515]: E1101 10:00:40.403919    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761991240403478698  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
                                                
                                                	Nov 01 10:00:40 addons-086339 kubelet[1515]: E1101 10:00:40.403948    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761991240403478698  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
                                                
                                                	Nov 01 10:00:41 addons-086339 kubelet[1515]: W1101 10:00:41.484334    1515 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
                                                
                                                Nov 01 10:00:42 addons-086339 kubelet[1515]: E1101 10:00:42.028037 1515 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="80f28ba1-b1ac-4f7a-9a35-3fd834d8e54e"
Nov 01 10:00:47 addons-086339 kubelet[1515]: E1101 10:00:47.026330 1515 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="eb0ec6cf-d05a-4514-92a8-21a6ef18f433"
	Nov 01 10:00:50 addons-086339 kubelet[1515]: E1101 10:00:50.407135    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761991250406393908  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
                                                
                                                	Nov 01 10:00:50 addons-086339 kubelet[1515]: E1101 10:00:50.407379    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761991250406393908  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
                                                
                                                Nov 01 10:00:52 addons-086339 kubelet[1515]: E1101 10:00:52.540918 1515 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
Nov 01 10:00:52 addons-086339 kubelet[1515]: E1101 10:00:52.540992 1515 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
Nov 01 10:00:52 addons-086339 kubelet[1515]: E1101 10:00:52.541273 1515 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod test-local-path_default(bb9a245d-f766-4ca6-8de9-96b056a9cab4): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Nov 01 10:00:52 addons-086339 kubelet[1515]: E1101 10:00:52.541325 1515 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="bb9a245d-f766-4ca6-8de9-96b056a9cab4"
Nov 01 10:00:59 addons-086339 kubelet[1515]: E1101 10:00:59.026609 1515 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="eb0ec6cf-d05a-4514-92a8-21a6ef18f433"
Nov 01 10:01:00 addons-086339 kubelet[1515]: E1101 10:01:00.412783 1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761991260412229847  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
Nov 01 10:01:00 addons-086339 kubelet[1515]: E1101 10:01:00.412909 1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761991260412229847  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
Nov 01 10:01:06 addons-086339 kubelet[1515]: E1101 10:01:06.031030 1515 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="bb9a245d-f766-4ca6-8de9-96b056a9cab4"
Nov 01 10:01:07 addons-086339 kubelet[1515]: I1101 10:01:07.026949 1515 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-lr4lw" secret="" err="secret \"gcp-auth\" not found"
Nov 01 10:01:07 addons-086339 kubelet[1515]: I1101 10:01:07.027118 1515 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Nov 01 10:01:10 addons-086339 kubelet[1515]: E1101 10:01:10.416134 1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761991270415643523  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
Nov 01 10:01:10 addons-086339 kubelet[1515]: E1101 10:01:10.416433 1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761991270415643523  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"

==> storage-provisioner [6de230bb7ebf7771aad2c97275ddf43f297877d5aa072670a0a1ea8eb9c2d60f] <==
W1101 10:00:47.730533 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:00:49.734700 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:00:49.739899 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:00:51.744908 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:00:51.750173 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:00:53.755077 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:00:53.765866 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:00:55.770215 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:00:55.780157 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:00:57.784138 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:00:57.789302 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:00:59.794506 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:00:59.800166 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:01:01.804246 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:01:01.813039 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:01:03.817324 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:01:03.823977 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:01:05.827893 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:01:05.836170 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:01:07.839065 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:01:07.845342 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:01:09.849236 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:01:09.856680 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:01:11.863958 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:01:11.871501 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-086339 -n addons-086339
helpers_test.go:269: (dbg) Run:  kubectl --context addons-086339 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-d7qkm ingress-nginx-admission-patch-dw6sn
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context addons-086339 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-d7qkm ingress-nginx-admission-patch-dw6sn
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-086339 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-d7qkm ingress-nginx-admission-patch-dw6sn: exit status 1 (92.976172ms)
-- stdout --
Name: nginx
Namespace: default
Priority: 0
Service Account: default
Node: addons-086339/192.168.39.58
Start Time: Sat, 01 Nov 2025 09:53:11 +0000
Labels: run=nginx
Annotations: <none>
Status: Pending
IP: 10.244.0.29
IPs:
IP: 10.244.0.29
Containers:
nginx:
Container ID:
Image: docker.io/nginx:alpine
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sggwf (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-sggwf:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  8m2s                  default-scheduler  Successfully assigned default/nginx to addons-086339
Warning  Failed     5m6s                  kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     102s (x3 over 6m55s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     102s (x4 over 6m55s)  kubelet            Error: ErrImagePull
Normal   BackOff    31s (x11 over 6m54s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     31s (x11 over 6m54s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    17s (x5 over 8m2s)    kubelet            Pulling image "docker.io/nginx:alpine"
Name: task-pv-pod
Namespace: default
Priority: 0
Service Account: default
Node: addons-086339/192.168.39.58
Start Time: Sat, 01 Nov 2025 09:53:15 +0000
Labels: app=task-pv-pod
Annotations: <none>
Status: Pending
IP: 10.244.0.30
IPs:
IP: 10.244.0.30
Containers:
task-pv-container:
Container ID:
Image: docker.io/nginx
Image ID:
Port: 80/TCP (http-server)
Host Port: 0/TCP (http-server)
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x27kl (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
task-pv-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: hpvc
ReadOnly: false
kube-api-access-x27kl:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  7m58s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-086339
Warning  Failed     2m45s                 kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    119s (x4 over 7m57s)  kubelet            Pulling image "docker.io/nginx"
Warning  Failed     71s (x3 over 6m23s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     71s (x4 over 6m23s)   kubelet            Error: ErrImagePull
Normal   BackOff    14s (x9 over 6m23s)   kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     14s (x9 over 6m23s)   kubelet            Error: ImagePullBackOff
Name: test-local-path
Namespace: default
Priority: 0
Service Account: default
Node: addons-086339/192.168.39.58
Start Time: Sat, 01 Nov 2025 09:52:55 +0000
Labels: run=test-local-path
Annotations: <none>
Status: Pending
IP: 10.244.0.27
IPs:
IP: 10.244.0.27
Containers:
busybox:
Container ID:
Image: busybox:stable
Image ID:
Port: <none>
Host Port: <none>
Command:
sh
-c
echo 'local-path-provisioner' > /test/file1
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/test from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t5c9x (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: test-pvc
ReadOnly: false
kube-api-access-t5c9x:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  8m18s                 default-scheduler  Successfully assigned default/test-local-path to addons-086339
Warning  Failed     2m14s (x3 over 5m52s)  kubelet           Failed to pull image "busybox:stable": reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    52s (x5 over 8m15s)   kubelet            Pulling image "busybox:stable"
Warning  Failed     21s (x2 over 7m29s)   kubelet            Failed to pull image "busybox:stable": fetching target platform image selected from image index: reading manifest sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     21s (x5 over 7m29s)   kubelet            Error: ErrImagePull
Normal   BackOff    7s (x11 over 7m29s)   kubelet            Back-off pulling image "busybox:stable"
Warning  Failed     7s (x11 over 7m29s)   kubelet            Error: ImagePullBackOff
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-d7qkm" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-dw6sn" not found
** /stderr **
helpers_test.go:287: kubectl --context addons-086339 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-d7qkm ingress-nginx-admission-patch-dw6sn: exit status 1
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-086339 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-086339 addons disable ingress-dns --alsologtostderr -v=1: (1.242394016s)
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-086339 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-086339 addons disable ingress --alsologtostderr -v=1: (7.787763763s)
--- FAIL: TestAddons/parallel/Ingress (491.94s)