=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run: kubectl --context addons-610936 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run: kubectl --context addons-610936 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run: kubectl --context addons-610936 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [d2369d8f-b848-4d1a-9e8f-e2845ef60291] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [d2369d8f-b848-4d1a-9e8f-e2845ef60291] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.006725127s
I1101 09:30:16.969554 348518 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run: out/minikube-linux-amd64 -p addons-610936 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-610936 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.855729072s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run: kubectl --context addons-610936 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run: out/minikube-linux-amd64 -p addons-610936 ip
addons_test.go:299: (dbg) Run: nslookup hello-john.test 192.168.39.81
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-610936 -n addons-610936
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p addons-610936 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-610936 logs -n 25: (1.619690025s)
helpers_test.go:260: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-662663 │ download-only-662663 │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
│ start │ --download-only -p binary-mirror-267138 --alsologtostderr --binary-mirror http://127.0.0.1:35611 --driver=kvm2 --container-runtime=crio │ binary-mirror-267138 │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ │
│ delete │ -p binary-mirror-267138 │ binary-mirror-267138 │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
│ addons │ disable dashboard -p addons-610936 │ addons-610936 │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ │
│ addons │ enable dashboard -p addons-610936 │ addons-610936 │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ │
│ start │ -p addons-610936 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-610936 │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:29 UTC │
│ addons │ addons-610936 addons disable volcano --alsologtostderr -v=1 │ addons-610936 │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
│ addons │ addons-610936 addons disable gcp-auth --alsologtostderr -v=1 │ addons-610936 │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
│ addons │ enable headlamp -p addons-610936 --alsologtostderr -v=1 │ addons-610936 │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
│ addons │ addons-610936 addons disable yakd --alsologtostderr -v=1 │ addons-610936 │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:30 UTC │
│ addons │ addons-610936 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-610936 │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
│ addons │ addons-610936 addons disable metrics-server --alsologtostderr -v=1 │ addons-610936 │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
│ ip │ addons-610936 ip │ addons-610936 │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
│ addons │ addons-610936 addons disable headlamp --alsologtostderr -v=1 │ addons-610936 │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
│ addons │ addons-610936 addons disable registry --alsologtostderr -v=1 │ addons-610936 │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
│ ssh │ addons-610936 ssh cat /opt/local-path-provisioner/pvc-479a1c05-a807-4c11-a5ef-bb253fe0f186_default_test-pvc/file1 │ addons-610936 │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
│ addons │ addons-610936 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-610936 │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
│ addons │ addons-610936 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-610936 │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-610936 │ addons-610936 │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
│ ssh │ addons-610936 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-610936 │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ │
│ addons │ addons-610936 addons disable registry-creds --alsologtostderr -v=1 │ addons-610936 │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
│ addons │ addons-610936 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-610936 │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
│ addons │ addons-610936 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-610936 │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
│ addons │ addons-610936 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-610936 │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
│ ip │ addons-610936 ip │ addons-610936 │ jenkins │ v1.37.0 │ 01 Nov 25 09:32 UTC │ 01 Nov 25 09:32 UTC │
└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/11/01 09:26:48
Running on machine: ubuntu-20-agent-15
Binary: Built with gc go1.24.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1101 09:26:48.167105 349088 out.go:360] Setting OutFile to fd 1 ...
I1101 09:26:48.167358 349088 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:26:48.167366 349088 out.go:374] Setting ErrFile to fd 2...
I1101 09:26:48.167370 349088 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:26:48.167565 349088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-344560/.minikube/bin
I1101 09:26:48.168108 349088 out.go:368] Setting JSON to false
I1101 09:26:48.169806 349088 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4156,"bootTime":1761985052,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1101 09:26:48.170059 349088 start.go:143] virtualization: kvm guest
I1101 09:26:48.171753 349088 out.go:179] * [addons-610936] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1101 09:26:48.173165 349088 out.go:179] - MINIKUBE_LOCATION=21832
I1101 09:26:48.173177 349088 notify.go:221] Checking for updates...
I1101 09:26:48.174607 349088 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1101 09:26:48.175976 349088 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21832-344560/kubeconfig
I1101 09:26:48.177208 349088 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-344560/.minikube
I1101 09:26:48.178346 349088 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1101 09:26:48.179555 349088 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1101 09:26:48.181019 349088 driver.go:422] Setting default libvirt URI to qemu:///system
I1101 09:26:48.212128 349088 out.go:179] * Using the kvm2 driver based on user configuration
I1101 09:26:48.213542 349088 start.go:309] selected driver: kvm2
I1101 09:26:48.213561 349088 start.go:930] validating driver "kvm2" against <nil>
I1101 09:26:48.213574 349088 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1101 09:26:48.214280 349088 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1101 09:26:48.214531 349088 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1101 09:26:48.214572 349088 cni.go:84] Creating CNI manager for ""
I1101 09:26:48.214647 349088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1101 09:26:48.214656 349088 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1101 09:26:48.214699 349088 start.go:353] cluster config:
{Name:addons-610936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-610936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1101 09:26:48.214803 349088 iso.go:125] acquiring lock: {Name:mkc74493fbbc2007c645c4ed6349cf76e7fb2185 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1101 09:26:48.217210 349088 out.go:179] * Starting "addons-610936" primary control-plane node in "addons-610936" cluster
I1101 09:26:48.218317 349088 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1101 09:26:48.218360 349088 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-344560/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
I1101 09:26:48.218369 349088 cache.go:59] Caching tarball of preloaded images
I1101 09:26:48.218474 349088 preload.go:233] Found /home/jenkins/minikube-integration/21832-344560/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1101 09:26:48.218485 349088 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
I1101 09:26:48.218827 349088 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/config.json ...
I1101 09:26:48.218853 349088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/config.json: {Name:mk116c209680bfabd911f460b995157de8b4aa36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:26:48.219021 349088 start.go:360] acquireMachinesLock for addons-610936: {Name:mkd221a68334bc82c567a9a06c8563af1e1c38bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1101 09:26:48.219068 349088 start.go:364] duration metric: took 33.124µs to acquireMachinesLock for "addons-610936"
I1101 09:26:48.219087 349088 start.go:93] Provisioning new machine with config: &{Name:addons-610936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-610936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1101 09:26:48.219137 349088 start.go:125] createHost starting for "" (driver="kvm2")
I1101 09:26:48.221497 349088 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1101 09:26:48.221687 349088 start.go:159] libmachine.API.Create for "addons-610936" (driver="kvm2")
I1101 09:26:48.221717 349088 client.go:173] LocalClient.Create starting
I1101 09:26:48.221840 349088 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem
I1101 09:26:48.388426 349088 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/cert.pem
I1101 09:26:48.635134 349088 main.go:143] libmachine: creating domain...
I1101 09:26:48.635159 349088 main.go:143] libmachine: creating network...
I1101 09:26:48.636858 349088 main.go:143] libmachine: found existing default network
I1101 09:26:48.637100 349088 main.go:143] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1101 09:26:48.637762 349088 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001de6a30}
I1101 09:26:48.637859 349088 main.go:143] libmachine: defining private network:
<network>
<name>mk-addons-610936</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1101 09:26:48.644232 349088 main.go:143] libmachine: creating private network mk-addons-610936 192.168.39.0/24...
I1101 09:26:48.720421 349088 main.go:143] libmachine: private network mk-addons-610936 192.168.39.0/24 created
I1101 09:26:48.720825 349088 main.go:143] libmachine: <network>
<name>mk-addons-610936</name>
<uuid>c04680c9-4ec5-4b42-a8d4-fa5488b481f3</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:29:1d:e7'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1101 09:26:48.720884 349088 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936 ...
I1101 09:26:48.720914 349088 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21832-344560/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
I1101 09:26:48.720926 349088 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21832-344560/.minikube
I1101 09:26:48.721004 349088 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21832-344560/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21832-344560/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso...
I1101 09:26:48.997189 349088 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa...
I1101 09:26:49.157415 349088 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/addons-610936.rawdisk...
I1101 09:26:49.157462 349088 main.go:143] libmachine: Writing magic tar header
I1101 09:26:49.157488 349088 main.go:143] libmachine: Writing SSH key tar header
I1101 09:26:49.157566 349088 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936 ...
I1101 09:26:49.157634 349088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936
I1101 09:26:49.157660 349088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936 (perms=drwx------)
I1101 09:26:49.157672 349088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21832-344560/.minikube/machines
I1101 09:26:49.157682 349088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21832-344560/.minikube/machines (perms=drwxr-xr-x)
I1101 09:26:49.157694 349088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21832-344560/.minikube
I1101 09:26:49.157703 349088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21832-344560/.minikube (perms=drwxr-xr-x)
I1101 09:26:49.157714 349088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21832-344560
I1101 09:26:49.157723 349088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21832-344560 (perms=drwxrwxr-x)
I1101 09:26:49.157733 349088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1101 09:26:49.157743 349088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1101 09:26:49.157752 349088 main.go:143] libmachine: checking permissions on dir: /home/jenkins
I1101 09:26:49.157762 349088 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1101 09:26:49.157771 349088 main.go:143] libmachine: checking permissions on dir: /home
I1101 09:26:49.157785 349088 main.go:143] libmachine: skipping /home - not owner
I1101 09:26:49.157790 349088 main.go:143] libmachine: defining domain...
I1101 09:26:49.159224 349088 main.go:143] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-610936</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/addons-610936.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-610936'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
I1101 09:26:49.167197 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:15:bc:d9 in network default
I1101 09:26:49.167848 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:26:49.167888 349088 main.go:143] libmachine: starting domain...
I1101 09:26:49.167893 349088 main.go:143] libmachine: ensuring networks are active...
I1101 09:26:49.168767 349088 main.go:143] libmachine: Ensuring network default is active
I1101 09:26:49.169283 349088 main.go:143] libmachine: Ensuring network mk-addons-610936 is active
I1101 09:26:49.170094 349088 main.go:143] libmachine: getting domain XML...
I1101 09:26:49.171390 349088 main.go:143] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-610936</name>
<uuid>067cbdb7-aeda-471a-aaf4-ef736820bc12</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/addons-610936.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:ff:5a:50'/>
<source network='mk-addons-610936'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:15:bc:d9'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1101 09:26:50.610080 349088 main.go:143] libmachine: waiting for domain to start...
I1101 09:26:50.611613 349088 main.go:143] libmachine: domain is now running
I1101 09:26:50.611630 349088 main.go:143] libmachine: waiting for IP...
I1101 09:26:50.612434 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:26:50.612919 349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
I1101 09:26:50.612936 349088 main.go:143] libmachine: trying to list again with source=arp
I1101 09:26:50.613211 349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
I1101 09:26:50.613268 349088 retry.go:31] will retry after 191.100412ms: waiting for domain to come up
I1101 09:26:50.805816 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:26:50.806422 349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
I1101 09:26:50.806439 349088 main.go:143] libmachine: trying to list again with source=arp
I1101 09:26:50.806763 349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
I1101 09:26:50.806802 349088 retry.go:31] will retry after 376.554484ms: waiting for domain to come up
I1101 09:26:51.185497 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:26:51.186174 349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
I1101 09:26:51.186199 349088 main.go:143] libmachine: trying to list again with source=arp
I1101 09:26:51.186511 349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
I1101 09:26:51.186559 349088 retry.go:31] will retry after 420.878905ms: waiting for domain to come up
I1101 09:26:51.609310 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:26:51.609971 349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
I1101 09:26:51.609994 349088 main.go:143] libmachine: trying to list again with source=arp
I1101 09:26:51.610341 349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
I1101 09:26:51.610389 349088 retry.go:31] will retry after 566.258468ms: waiting for domain to come up
I1101 09:26:52.178431 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:26:52.179181 349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
I1101 09:26:52.179209 349088 main.go:143] libmachine: trying to list again with source=arp
I1101 09:26:52.179569 349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
I1101 09:26:52.179618 349088 retry.go:31] will retry after 510.874727ms: waiting for domain to come up
I1101 09:26:52.692621 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:26:52.693178 349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
I1101 09:26:52.693208 349088 main.go:143] libmachine: trying to list again with source=arp
I1101 09:26:52.693504 349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
I1101 09:26:52.693542 349088 retry.go:31] will retry after 644.803122ms: waiting for domain to come up
I1101 09:26:53.340554 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:26:53.341164 349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
I1101 09:26:53.341184 349088 main.go:143] libmachine: trying to list again with source=arp
I1101 09:26:53.341490 349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
I1101 09:26:53.341531 349088 retry.go:31] will retry after 1.023512628s: waiting for domain to come up
I1101 09:26:54.366813 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:26:54.367498 349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
I1101 09:26:54.367519 349088 main.go:143] libmachine: trying to list again with source=arp
I1101 09:26:54.367825 349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
I1101 09:26:54.367879 349088 retry.go:31] will retry after 1.39212269s: waiting for domain to come up
I1101 09:26:55.761274 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:26:55.761890 349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
I1101 09:26:55.761912 349088 main.go:143] libmachine: trying to list again with source=arp
I1101 09:26:55.762245 349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
I1101 09:26:55.762288 349088 retry.go:31] will retry after 1.430220685s: waiting for domain to come up
I1101 09:26:57.194971 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:26:57.195519 349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
I1101 09:26:57.195537 349088 main.go:143] libmachine: trying to list again with source=arp
I1101 09:26:57.195885 349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
I1101 09:26:57.195955 349088 retry.go:31] will retry after 2.020848163s: waiting for domain to come up
I1101 09:26:59.218180 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:26:59.218898 349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
I1101 09:26:59.218919 349088 main.go:143] libmachine: trying to list again with source=arp
I1101 09:26:59.219347 349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
I1101 09:26:59.219393 349088 retry.go:31] will retry after 2.273208384s: waiting for domain to come up
I1101 09:27:01.493989 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:01.494592 349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
I1101 09:27:01.494610 349088 main.go:143] libmachine: trying to list again with source=arp
I1101 09:27:01.494974 349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
I1101 09:27:01.495018 349088 retry.go:31] will retry after 3.392803853s: waiting for domain to come up
I1101 09:27:04.890722 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:04.891366 349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
I1101 09:27:04.891384 349088 main.go:143] libmachine: trying to list again with source=arp
I1101 09:27:04.891759 349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
I1101 09:27:04.891802 349088 retry.go:31] will retry after 4.312687921s: waiting for domain to come up
I1101 09:27:09.206313 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:09.206961 349088 main.go:143] libmachine: domain addons-610936 has current primary IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:09.206984 349088 main.go:143] libmachine: found domain IP: 192.168.39.81
I1101 09:27:09.206992 349088 main.go:143] libmachine: reserving static IP address...
I1101 09:27:09.207571 349088 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-610936", mac: "52:54:00:ff:5a:50", ip: "192.168.39.81"} in network mk-addons-610936
I1101 09:27:09.398745 349088 main.go:143] libmachine: reserved static IP address 192.168.39.81 for domain addons-610936
I1101 09:27:09.398795 349088 main.go:143] libmachine: waiting for SSH...
I1101 09:27:09.398806 349088 main.go:143] libmachine: Getting to WaitForSSH function...
I1101 09:27:09.402334 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:09.402881 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:09.402923 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:09.403182 349088 main.go:143] libmachine: Using SSH client type: native
I1101 09:27:09.403470 349088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil> [] 0s} 192.168.39.81 22 <nil> <nil>}
I1101 09:27:09.403485 349088 main.go:143] libmachine: About to run SSH command:
exit 0
I1101 09:27:09.508904 349088 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1101 09:27:09.509279 349088 main.go:143] libmachine: domain creation complete
I1101 09:27:09.510884 349088 machine.go:94] provisionDockerMachine start ...
I1101 09:27:09.513206 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:09.513568 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:09.513591 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:09.513799 349088 main.go:143] libmachine: Using SSH client type: native
I1101 09:27:09.514069 349088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil> [] 0s} 192.168.39.81 22 <nil> <nil>}
I1101 09:27:09.514083 349088 main.go:143] libmachine: About to run SSH command:
hostname
I1101 09:27:09.617282 349088 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
I1101 09:27:09.617319 349088 buildroot.go:166] provisioning hostname "addons-610936"
I1101 09:27:09.620116 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:09.620592 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:09.620626 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:09.620836 349088 main.go:143] libmachine: Using SSH client type: native
I1101 09:27:09.621089 349088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil> [] 0s} 192.168.39.81 22 <nil> <nil>}
I1101 09:27:09.621105 349088 main.go:143] libmachine: About to run SSH command:
sudo hostname addons-610936 && echo "addons-610936" | sudo tee /etc/hostname
I1101 09:27:09.747625 349088 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-610936
I1101 09:27:09.750468 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:09.751026 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:09.751064 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:09.751283 349088 main.go:143] libmachine: Using SSH client type: native
I1101 09:27:09.751531 349088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil> [] 0s} 192.168.39.81 22 <nil> <nil>}
I1101 09:27:09.751555 349088 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-610936' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-610936/g' /etc/hosts;
else
echo '127.0.1.1 addons-610936' | sudo tee -a /etc/hosts;
fi
fi
I1101 09:27:09.867133 349088 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1101 09:27:09.867168 349088 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21832-344560/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-344560/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-344560/.minikube}
I1101 09:27:09.867193 349088 buildroot.go:174] setting up certificates
I1101 09:27:09.867211 349088 provision.go:84] configureAuth start
I1101 09:27:09.870717 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:09.871266 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:09.871291 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:09.874072 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:09.874675 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:09.874720 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:09.874997 349088 provision.go:143] copyHostCerts
I1101 09:27:09.875078 349088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-344560/.minikube/ca.pem (1082 bytes)
I1101 09:27:09.875223 349088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-344560/.minikube/cert.pem (1123 bytes)
I1101 09:27:09.875291 349088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-344560/.minikube/key.pem (1679 bytes)
I1101 09:27:09.875382 349088 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-344560/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca-key.pem org=jenkins.addons-610936 san=[127.0.0.1 192.168.39.81 addons-610936 localhost minikube]
I1101 09:27:09.989492 349088 provision.go:177] copyRemoteCerts
I1101 09:27:09.989556 349088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1101 09:27:09.992515 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:09.992931 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:09.992954 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:09.993174 349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
I1101 09:27:10.076686 349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1101 09:27:10.110156 349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1101 09:27:10.144397 349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1101 09:27:10.176734 349088 provision.go:87] duration metric: took 309.504075ms to configureAuth
I1101 09:27:10.176769 349088 buildroot.go:189] setting minikube options for container-runtime
I1101 09:27:10.176994 349088 config.go:182] Loaded profile config "addons-610936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:27:10.180094 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:10.180526 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:10.180576 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:10.180772 349088 main.go:143] libmachine: Using SSH client type: native
I1101 09:27:10.181020 349088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil> [] 0s} 192.168.39.81 22 <nil> <nil>}
I1101 09:27:10.181044 349088 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1101 09:27:10.423886 349088 main.go:143] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1101 09:27:10.423927 349088 machine.go:97] duration metric: took 913.019036ms to provisionDockerMachine
I1101 09:27:10.423960 349088 client.go:176] duration metric: took 22.202220225s to LocalClient.Create
I1101 09:27:10.423984 349088 start.go:167] duration metric: took 22.202306595s to libmachine.API.Create "addons-610936"
I1101 09:27:10.423995 349088 start.go:293] postStartSetup for "addons-610936" (driver="kvm2")
I1101 09:27:10.424021 349088 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1101 09:27:10.424113 349088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1101 09:27:10.427157 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:10.427601 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:10.427632 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:10.427844 349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
I1101 09:27:10.511498 349088 ssh_runner.go:195] Run: cat /etc/os-release
I1101 09:27:10.517271 349088 info.go:137] Remote host: Buildroot 2025.02
I1101 09:27:10.517302 349088 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-344560/.minikube/addons for local assets ...
I1101 09:27:10.517385 349088 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-344560/.minikube/files for local assets ...
I1101 09:27:10.517414 349088 start.go:296] duration metric: took 93.412558ms for postStartSetup
I1101 09:27:10.520815 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:10.521283 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:10.521311 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:10.521634 349088 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/config.json ...
I1101 09:27:10.521902 349088 start.go:128] duration metric: took 22.302751877s to createHost
I1101 09:27:10.524323 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:10.524907 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:10.524931 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:10.525104 349088 main.go:143] libmachine: Using SSH client type: native
I1101 09:27:10.525313 349088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil> [] 0s} 192.168.39.81 22 <nil> <nil>}
I1101 09:27:10.525323 349088 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1101 09:27:10.630156 349088 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761989230.587115009
I1101 09:27:10.630181 349088 fix.go:216] guest clock: 1761989230.587115009
I1101 09:27:10.630189 349088 fix.go:229] Guest: 2025-11-01 09:27:10.587115009 +0000 UTC Remote: 2025-11-01 09:27:10.521918664 +0000 UTC m=+22.404168301 (delta=65.196345ms)
I1101 09:27:10.630208 349088 fix.go:200] guest clock delta is within tolerance: 65.196345ms
I1101 09:27:10.630214 349088 start.go:83] releasing machines lock for "addons-610936", held for 22.411135579s
I1101 09:27:10.633362 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:10.633787 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:10.633814 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:10.634490 349088 ssh_runner.go:195] Run: cat /version.json
I1101 09:27:10.634691 349088 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1101 09:27:10.637655 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:10.638048 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:10.638073 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:10.638091 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:10.638260 349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
I1101 09:27:10.638636 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:10.638668 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:10.638882 349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
I1101 09:27:10.726485 349088 ssh_runner.go:195] Run: systemctl --version
I1101 09:27:10.753372 349088 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1101 09:27:10.918384 349088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1101 09:27:10.926453 349088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1101 09:27:10.926532 349088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1101 09:27:10.953477 349088 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1101 09:27:10.953509 349088 start.go:496] detecting cgroup driver to use...
I1101 09:27:10.953584 349088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1101 09:27:10.975497 349088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1101 09:27:10.993511 349088 docker.go:218] disabling cri-docker service (if available) ...
I1101 09:27:10.993614 349088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1101 09:27:11.013163 349088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1101 09:27:11.031045 349088 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1101 09:27:11.180352 349088 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1101 09:27:11.402043 349088 docker.go:234] disabling docker service ...
I1101 09:27:11.402149 349088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1101 09:27:11.421224 349088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1101 09:27:11.438153 349088 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1101 09:27:11.600805 349088 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1101 09:27:11.754881 349088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1101 09:27:11.771449 349088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1101 09:27:11.797432 349088 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1101 09:27:11.797544 349088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1101 09:27:11.812142 349088 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1101 09:27:11.812249 349088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1101 09:27:11.826346 349088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1101 09:27:11.841711 349088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1101 09:27:11.855380 349088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1101 09:27:11.869917 349088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1101 09:27:11.884150 349088 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1101 09:27:11.906530 349088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I1101 09:27:11.920203 349088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1101 09:27:11.932360 349088 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1101 09:27:11.932437 349088 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1101 09:27:11.954832 349088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1101 09:27:11.968256 349088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 09:27:12.115585 349088 ssh_runner.go:195] Run: sudo systemctl restart crio
I1101 09:27:12.234503 349088 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1101 09:27:12.234602 349088 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1101 09:27:12.240643 349088 start.go:564] Will wait 60s for crictl version
I1101 09:27:12.240732 349088 ssh_runner.go:195] Run: which crictl
I1101 09:27:12.245393 349088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1101 09:27:12.291466 349088 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1101 09:27:12.291608 349088 ssh_runner.go:195] Run: crio --version
I1101 09:27:12.323851 349088 ssh_runner.go:195] Run: crio --version
I1101 09:27:12.358425 349088 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
I1101 09:27:12.362465 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:12.362850 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:12.362882 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:12.363077 349088 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1101 09:27:12.368326 349088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1101 09:27:12.385147 349088 kubeadm.go:884] updating cluster {Name:addons-610936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-610936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1101 09:27:12.385306 349088 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1101 09:27:12.385374 349088 ssh_runner.go:195] Run: sudo crictl images --output json
I1101 09:27:12.428654 349088 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
I1101 09:27:12.428761 349088 ssh_runner.go:195] Run: which lz4
I1101 09:27:12.433783 349088 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1101 09:27:12.439050 349088 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1101 09:27:12.439091 349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
I1101 09:27:14.042686 349088 crio.go:462] duration metric: took 1.60892747s to copy over tarball
I1101 09:27:14.042766 349088 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1101 09:27:15.917932 349088 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.875136305s)
I1101 09:27:15.917968 349088 crio.go:469] duration metric: took 1.875249656s to extract the tarball
I1101 09:27:15.917983 349088 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1101 09:27:15.960792 349088 ssh_runner.go:195] Run: sudo crictl images --output json
I1101 09:27:16.009430 349088 crio.go:514] all images are preloaded for cri-o runtime.
I1101 09:27:16.009457 349088 cache_images.go:86] Images are preloaded, skipping loading
I1101 09:27:16.009466 349088 kubeadm.go:935] updating node { 192.168.39.81 8443 v1.34.1 crio true true} ...
I1101 09:27:16.009578 349088 kubeadm.go:947] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-610936 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.81
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:addons-610936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1101 09:27:16.009675 349088 ssh_runner.go:195] Run: crio config
I1101 09:27:16.060176 349088 cni.go:84] Creating CNI manager for ""
I1101 09:27:16.060212 349088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1101 09:27:16.060242 349088 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1101 09:27:16.060276 349088 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.81 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-610936 NodeName:addons-610936 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1101 09:27:16.060445 349088 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.81
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "addons-610936"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.39.81"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.81"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1101 09:27:16.060527 349088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1101 09:27:16.074680 349088 binaries.go:44] Found k8s binaries, skipping transfer
I1101 09:27:16.074776 349088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1101 09:27:16.087881 349088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I1101 09:27:16.111202 349088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1101 09:27:16.133656 349088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
I1101 09:27:16.156714 349088 ssh_runner.go:195] Run: grep 192.168.39.81 control-plane.minikube.internal$ /etc/hosts
I1101 09:27:16.161539 349088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.81 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1101 09:27:16.178210 349088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 09:27:16.328129 349088 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1101 09:27:16.365521 349088 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936 for IP: 192.168.39.81
I1101 09:27:16.365546 349088 certs.go:195] generating shared ca certs ...
I1101 09:27:16.365564 349088 certs.go:227] acquiring lock for ca certs: {Name:mkba0fe79f6b0ed99353299aaf34c6fbc547c6f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:27:16.365755 349088 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-344560/.minikube/ca.key
I1101 09:27:16.744900 349088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-344560/.minikube/ca.crt ...
I1101 09:27:16.744937 349088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/ca.crt: {Name:mk70cb9468642ed5e7f9912a400b1e74296dea21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:27:16.745125 349088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-344560/.minikube/ca.key ...
I1101 09:27:16.745142 349088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/ca.key: {Name:mked04b0822cde1b132009ea6307ff8ea52511e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:27:16.745220 349088 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-344560/.minikube/proxy-client-ca.key
I1101 09:27:16.916593 349088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-344560/.minikube/proxy-client-ca.crt ...
I1101 09:27:16.916628 349088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/proxy-client-ca.crt: {Name:mk898d13bfe08ac956aa016515b4e39e57dce709 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:27:16.916816 349088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-344560/.minikube/proxy-client-ca.key ...
I1101 09:27:16.916828 349088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/proxy-client-ca.key: {Name:mk881f64e8f0f9e8118c2ea53f7a353ac29f8b60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:27:16.916913 349088 certs.go:257] generating profile certs ...
I1101 09:27:16.916976 349088 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.key
I1101 09:27:16.916991 349088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt with IP's: []
I1101 09:27:17.062434 349088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt ...
I1101 09:27:17.062464 349088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: {Name:mk4a0448dcedd6f68d492b4d5f914e5cca0df07b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:27:17.062634 349088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.key ...
I1101 09:27:17.062646 349088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.key: {Name:mk8a00c8e5b18bb947e29b9b32095da84b4faa70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:27:17.062726 349088 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/apiserver.key.3a89fe33
I1101 09:27:17.062744 349088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/apiserver.crt.3a89fe33 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.81]
I1101 09:27:17.220204 349088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/apiserver.crt.3a89fe33 ...
I1101 09:27:17.220242 349088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/apiserver.crt.3a89fe33: {Name:mk6e5f9fc47945ea3e26016859030a8f20a5f7ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:27:17.220428 349088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/apiserver.key.3a89fe33 ...
I1101 09:27:17.220442 349088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/apiserver.key.3a89fe33: {Name:mk0b48ff33f7be98383eb1c773640c67bdeb8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:27:17.220515 349088 certs.go:382] copying /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/apiserver.crt.3a89fe33 -> /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/apiserver.crt
I1101 09:27:17.220593 349088 certs.go:386] copying /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/apiserver.key.3a89fe33 -> /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/apiserver.key
I1101 09:27:17.220642 349088 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/proxy-client.key
I1101 09:27:17.220664 349088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/proxy-client.crt with IP's: []
I1101 09:27:17.957328 349088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/proxy-client.crt ...
I1101 09:27:17.957365 349088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/proxy-client.crt: {Name:mk4099c959afd20f992944add321fedf171c1f59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:27:17.957555 349088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/proxy-client.key ...
I1101 09:27:17.957571 349088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/proxy-client.key: {Name:mk6dde8185d059ceb1f1fb5e409351057e2783ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:27:17.957764 349088 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca-key.pem (1675 bytes)
I1101 09:27:17.957801 349088 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem (1082 bytes)
I1101 09:27:17.957828 349088 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/cert.pem (1123 bytes)
I1101 09:27:17.957849 349088 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/key.pem (1679 bytes)
I1101 09:27:17.958435 349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1101 09:27:18.006290 349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1101 09:27:18.049283 349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1101 09:27:18.084041 349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1101 09:27:18.118882 349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1101 09:27:18.152497 349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1101 09:27:18.187113 349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1101 09:27:18.221898 349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1101 09:27:18.257661 349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1101 09:27:18.292943 349088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1101 09:27:18.316398 349088 ssh_runner.go:195] Run: openssl version
I1101 09:27:18.324206 349088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1101 09:27:18.338607 349088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1101 09:27:18.344696 349088 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 1 09:27 /usr/share/ca-certificates/minikubeCA.pem
I1101 09:27:18.344792 349088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1101 09:27:18.353174 349088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1101 09:27:18.368244 349088 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1101 09:27:18.374223 349088 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1101 09:27:18.374290 349088 kubeadm.go:401] StartCluster: {Name:addons-610936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-610936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1101 09:27:18.374380 349088 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1101 09:27:18.374487 349088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1101 09:27:18.418220 349088 cri.go:89] found id: ""
I1101 09:27:18.418311 349088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1101 09:27:18.431638 349088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1101 09:27:18.445051 349088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1101 09:27:18.458177 349088 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1101 09:27:18.458202 349088 kubeadm.go:158] found existing configuration files:
I1101 09:27:18.458256 349088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1101 09:27:18.471640 349088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1101 09:27:18.471726 349088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1101 09:27:18.485639 349088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1101 09:27:18.498284 349088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1101 09:27:18.498356 349088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1101 09:27:18.512788 349088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1101 09:27:18.526068 349088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1101 09:27:18.526134 349088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1101 09:27:18.539786 349088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1101 09:27:18.555113 349088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1101 09:27:18.555217 349088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1101 09:27:18.571565 349088 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1101 09:27:18.762017 349088 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1101 09:27:31.531695 349088 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
I1101 09:27:31.531837 349088 kubeadm.go:319] [preflight] Running pre-flight checks
I1101 09:27:31.532005 349088 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1101 09:27:31.532103 349088 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1101 09:27:31.532230 349088 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1101 09:27:31.532316 349088 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1101 09:27:31.535138 349088 out.go:252] - Generating certificates and keys ...
I1101 09:27:31.535262 349088 kubeadm.go:319] [certs] Using existing ca certificate authority
I1101 09:27:31.535356 349088 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1101 09:27:31.535456 349088 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1101 09:27:31.535514 349088 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1101 09:27:31.535563 349088 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1101 09:27:31.535608 349088 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1101 09:27:31.535652 349088 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1101 09:27:31.535753 349088 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-610936 localhost] and IPs [192.168.39.81 127.0.0.1 ::1]
I1101 09:27:31.535803 349088 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1101 09:27:31.535929 349088 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-610936 localhost] and IPs [192.168.39.81 127.0.0.1 ::1]
I1101 09:27:31.535987 349088 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1101 09:27:31.536038 349088 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1101 09:27:31.536075 349088 kubeadm.go:319] [certs] Generating "sa" key and public key
I1101 09:27:31.536122 349088 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1101 09:27:31.536164 349088 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1101 09:27:31.536211 349088 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1101 09:27:31.536259 349088 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1101 09:27:31.536326 349088 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1101 09:27:31.536392 349088 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1101 09:27:31.536465 349088 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1101 09:27:31.536527 349088 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1101 09:27:31.537895 349088 out.go:252] - Booting up control plane ...
I1101 09:27:31.537990 349088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1101 09:27:31.538063 349088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1101 09:27:31.538121 349088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1101 09:27:31.538211 349088 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1101 09:27:31.538300 349088 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1101 09:27:31.538394 349088 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1101 09:27:31.538469 349088 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1101 09:27:31.538504 349088 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1101 09:27:31.538702 349088 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1101 09:27:31.538838 349088 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1101 09:27:31.538912 349088 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00146068s
I1101 09:27:31.539018 349088 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1101 09:27:31.539146 349088 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.81:8443/livez
I1101 09:27:31.539227 349088 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1101 09:27:31.539296 349088 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1101 09:27:31.539356 349088 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.713243058s
I1101 09:27:31.539429 349088 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.738987655s
I1101 09:27:31.539508 349088 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.004913886s
I1101 09:27:31.539631 349088 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1101 09:27:31.539734 349088 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1101 09:27:31.539786 349088 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1101 09:27:31.539971 349088 kubeadm.go:319] [mark-control-plane] Marking the node addons-610936 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1101 09:27:31.540031 349088 kubeadm.go:319] [bootstrap-token] Using token: hxtxuv.39vanw3sg4xqodfn
I1101 09:27:31.541457 349088 out.go:252] - Configuring RBAC rules ...
I1101 09:27:31.541610 349088 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1101 09:27:31.541720 349088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1101 09:27:31.541880 349088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1101 09:27:31.542033 349088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1101 09:27:31.542167 349088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1101 09:27:31.542271 349088 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1101 09:27:31.542400 349088 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1101 09:27:31.542466 349088 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1101 09:27:31.542522 349088 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1101 09:27:31.542535 349088 kubeadm.go:319]
I1101 09:27:31.542582 349088 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1101 09:27:31.542597 349088 kubeadm.go:319]
I1101 09:27:31.542721 349088 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1101 09:27:31.542738 349088 kubeadm.go:319]
I1101 09:27:31.542770 349088 kubeadm.go:319] mkdir -p $HOME/.kube
I1101 09:27:31.542822 349088 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1101 09:27:31.542887 349088 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1101 09:27:31.542899 349088 kubeadm.go:319]
I1101 09:27:31.542944 349088 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1101 09:27:31.542951 349088 kubeadm.go:319]
I1101 09:27:31.542991 349088 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1101 09:27:31.542996 349088 kubeadm.go:319]
I1101 09:27:31.543037 349088 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1101 09:27:31.543129 349088 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1101 09:27:31.543222 349088 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1101 09:27:31.543232 349088 kubeadm.go:319]
I1101 09:27:31.543333 349088 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1101 09:27:31.543409 349088 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1101 09:27:31.543416 349088 kubeadm.go:319]
I1101 09:27:31.543483 349088 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hxtxuv.39vanw3sg4xqodfn \
I1101 09:27:31.543568 349088 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:8453eb9bfec31a6f8a04d37b2b2ee7df64866720c9de26f8457973b66dd9966b \
I1101 09:27:31.543594 349088 kubeadm.go:319] --control-plane
I1101 09:27:31.543598 349088 kubeadm.go:319]
I1101 09:27:31.543663 349088 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1101 09:27:31.543669 349088 kubeadm.go:319]
I1101 09:27:31.543771 349088 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hxtxuv.39vanw3sg4xqodfn \
I1101 09:27:31.543948 349088 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:8453eb9bfec31a6f8a04d37b2b2ee7df64866720c9de26f8457973b66dd9966b
I1101 09:27:31.543974 349088 cni.go:84] Creating CNI manager for ""
I1101 09:27:31.543987 349088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1101 09:27:31.545681 349088 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1101 09:27:31.547280 349088 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1101 09:27:31.566888 349088 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I1101 09:27:31.592379 349088 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1101 09:27:31.592444 349088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 09:27:31.592477 349088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-610936 minikube.k8s.io/updated_at=2025_11_01T09_27_31_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d minikube.k8s.io/name=addons-610936 minikube.k8s.io/primary=true
I1101 09:27:31.738248 349088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 09:27:31.827104 349088 ops.go:34] apiserver oom_adj: -16
I1101 09:27:32.239332 349088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 09:27:32.738661 349088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 09:27:33.238579 349088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 09:27:33.738989 349088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 09:27:34.238462 349088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 09:27:34.739204 349088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 09:27:35.238611 349088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 09:27:35.331026 349088 kubeadm.go:1114] duration metric: took 3.738648845s to wait for elevateKubeSystemPrivileges
I1101 09:27:35.331104 349088 kubeadm.go:403] duration metric: took 16.956793709s to StartCluster
I1101 09:27:35.331134 349088 settings.go:142] acquiring lock: {Name:mk0cdfdd584044c1d93f88e46e35ef3af10fed81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:27:35.331283 349088 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21832-344560/kubeconfig
I1101 09:27:35.331763 349088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/kubeconfig: {Name:mkaf75364e29c8ee4b260af678d355333969cf4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 09:27:35.332032 349088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1101 09:27:35.332033 349088 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1101 09:27:35.332067 349088 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1101 09:27:35.332278 349088 addons.go:70] Setting yakd=true in profile "addons-610936"
I1101 09:27:35.332287 349088 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-610936"
I1101 09:27:35.332298 349088 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-610936"
I1101 09:27:35.332309 349088 addons.go:70] Setting registry=true in profile "addons-610936"
I1101 09:27:35.332319 349088 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-610936"
I1101 09:27:35.332321 349088 addons.go:239] Setting addon registry=true in "addons-610936"
I1101 09:27:35.332351 349088 host.go:66] Checking if "addons-610936" exists ...
I1101 09:27:35.332353 349088 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-610936"
I1101 09:27:35.332364 349088 addons.go:70] Setting default-storageclass=true in profile "addons-610936"
I1101 09:27:35.332363 349088 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-610936"
I1101 09:27:35.332381 349088 host.go:66] Checking if "addons-610936" exists ...
I1101 09:27:35.332388 349088 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-610936"
I1101 09:27:35.332390 349088 addons.go:70] Setting gcp-auth=true in profile "addons-610936"
I1101 09:27:35.332409 349088 mustload.go:66] Loading cluster: addons-610936
I1101 09:27:35.332398 349088 addons.go:70] Setting cloud-spanner=true in profile "addons-610936"
I1101 09:27:35.332302 349088 addons.go:239] Setting addon yakd=true in "addons-610936"
I1101 09:27:35.332444 349088 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-610936"
I1101 09:27:35.332451 349088 host.go:66] Checking if "addons-610936" exists ...
I1101 09:27:35.332456 349088 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-610936"
I1101 09:27:35.332464 349088 addons.go:70] Setting ingress-dns=true in profile "addons-610936"
I1101 09:27:35.332531 349088 addons.go:239] Setting addon ingress-dns=true in "addons-610936"
I1101 09:27:35.332567 349088 config.go:182] Loaded profile config "addons-610936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:27:35.332571 349088 host.go:66] Checking if "addons-610936" exists ...
I1101 09:27:35.333079 349088 addons.go:70] Setting ingress=true in profile "addons-610936"
I1101 09:27:35.333102 349088 addons.go:239] Setting addon ingress=true in "addons-610936"
I1101 09:27:35.333135 349088 host.go:66] Checking if "addons-610936" exists ...
I1101 09:27:35.333187 349088 addons.go:70] Setting registry-creds=true in profile "addons-610936"
I1101 09:27:35.333216 349088 addons.go:239] Setting addon registry-creds=true in "addons-610936"
I1101 09:27:35.333221 349088 addons.go:70] Setting storage-provisioner=true in profile "addons-610936"
I1101 09:27:35.333240 349088 addons.go:239] Setting addon storage-provisioner=true in "addons-610936"
I1101 09:27:35.333269 349088 host.go:66] Checking if "addons-610936" exists ...
I1101 09:27:35.333286 349088 addons.go:70] Setting inspektor-gadget=true in profile "addons-610936"
I1101 09:27:35.333301 349088 addons.go:239] Setting addon inspektor-gadget=true in "addons-610936"
I1101 09:27:35.333318 349088 host.go:66] Checking if "addons-610936" exists ...
I1101 09:27:35.332380 349088 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-610936"
I1101 09:27:35.332355 349088 host.go:66] Checking if "addons-610936" exists ...
I1101 09:27:35.332279 349088 config.go:182] Loaded profile config "addons-610936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:27:35.334158 349088 addons.go:70] Setting volcano=true in profile "addons-610936"
I1101 09:27:35.334179 349088 addons.go:239] Setting addon volcano=true in "addons-610936"
I1101 09:27:35.334203 349088 host.go:66] Checking if "addons-610936" exists ...
I1101 09:27:35.332414 349088 host.go:66] Checking if "addons-610936" exists ...
I1101 09:27:35.334441 349088 addons.go:70] Setting metrics-server=true in profile "addons-610936"
I1101 09:27:35.334463 349088 addons.go:239] Setting addon metrics-server=true in "addons-610936"
I1101 09:27:35.334487 349088 host.go:66] Checking if "addons-610936" exists ...
I1101 09:27:35.334591 349088 addons.go:70] Setting volumesnapshots=true in profile "addons-610936"
I1101 09:27:35.334629 349088 addons.go:239] Setting addon volumesnapshots=true in "addons-610936"
I1101 09:27:35.334654 349088 out.go:179] * Verifying Kubernetes components...
I1101 09:27:35.334657 349088 host.go:66] Checking if "addons-610936" exists ...
I1101 09:27:35.333270 349088 host.go:66] Checking if "addons-610936" exists ...
I1101 09:27:35.332432 349088 addons.go:239] Setting addon cloud-spanner=true in "addons-610936"
I1101 09:27:35.334859 349088 host.go:66] Checking if "addons-610936" exists ...
I1101 09:27:35.336727 349088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 09:27:35.338761 349088 host.go:66] Checking if "addons-610936" exists ...
I1101 09:27:35.341333 349088 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-610936"
I1101 09:27:35.341384 349088 host.go:66] Checking if "addons-610936" exists ...
W1101 09:27:35.342991 349088 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1101 09:27:35.343436 349088 addons.go:239] Setting addon default-storageclass=true in "addons-610936"
I1101 09:27:35.343479 349088 host.go:66] Checking if "addons-610936" exists ...
I1101 09:27:35.344044 349088 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1101 09:27:35.344057 349088 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1101 09:27:35.344077 349088 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.5
I1101 09:27:35.344124 349088 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
I1101 09:27:35.344137 349088 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1101 09:27:35.345201 349088 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1101 09:27:35.345206 349088 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1101 09:27:35.345206 349088 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1101 09:27:35.345217 349088 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1101 09:27:35.345216 349088 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1101 09:27:35.345230 349088 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
I1101 09:27:35.345246 349088 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
I1101 09:27:35.345770 349088 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1101 09:27:35.346847 349088 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1101 09:27:35.347023 349088 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1101 09:27:35.347049 349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1101 09:27:35.346898 349088 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1101 09:27:35.347151 349088 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1101 09:27:35.346948 349088 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
I1101 09:27:35.347694 349088 out.go:179] - Using image docker.io/busybox:stable
I1101 09:27:35.347704 349088 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1101 09:27:35.348099 349088 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1101 09:27:35.347745 349088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1101 09:27:35.348195 349088 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1101 09:27:35.347766 349088 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1101 09:27:35.348271 349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1101 09:27:35.348508 349088 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1101 09:27:35.348551 349088 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1101 09:27:35.348557 349088 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1101 09:27:35.348565 349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1101 09:27:35.348568 349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1101 09:27:35.348638 349088 out.go:179] - Using image docker.io/registry:3.0.0
I1101 09:27:35.348643 349088 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
I1101 09:27:35.348757 349088 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1101 09:27:35.348763 349088 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
I1101 09:27:35.348767 349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1101 09:27:35.348885 349088 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
I1101 09:27:35.349507 349088 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1101 09:27:35.349528 349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1101 09:27:35.350226 349088 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1101 09:27:35.350242 349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1101 09:27:35.350843 349088 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1101 09:27:35.350875 349088 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1101 09:27:35.351638 349088 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
I1101 09:27:35.353581 349088 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1101 09:27:35.353647 349088 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1101 09:27:35.353662 349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1101 09:27:35.353731 349088 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1101 09:27:35.353744 349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1101 09:27:35.356403 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.356541 349088 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1101 09:27:35.357052 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.357229 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.358212 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:35.358255 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.358556 349088 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1101 09:27:35.359462 349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
I1101 09:27:35.359589 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:35.359618 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.359726 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.359805 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.359276 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:35.360016 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.360247 349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
I1101 09:27:35.360847 349088 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1101 09:27:35.361116 349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
I1101 09:27:35.361266 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.361851 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:35.361899 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.361913 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:35.361957 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.362180 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.362453 349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
I1101 09:27:35.362578 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.362699 349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
I1101 09:27:35.362843 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.362999 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:35.363031 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.363081 349088 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1101 09:27:35.363766 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.363800 349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
I1101 09:27:35.364037 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.364074 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:35.364100 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.364190 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:35.364227 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.364237 349088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1101 09:27:35.364252 349088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1101 09:27:35.364330 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:35.364364 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.364827 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.364879 349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
I1101 09:27:35.364992 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:35.365006 349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
I1101 09:27:35.365026 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.365117 349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
I1101 09:27:35.365416 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:35.365451 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.365501 349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
I1101 09:27:35.365680 349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
I1101 09:27:35.365846 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:35.365893 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.366234 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.366372 349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
I1101 09:27:35.366394 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.366787 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:35.366826 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:35.366844 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.366854 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.367138 349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
I1101 09:27:35.367141 349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
I1101 09:27:35.368547 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.369191 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:35.369225 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:35.369389 349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
W1101 09:27:35.729128 349088 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:59904->192.168.39.81:22: read: connection reset by peer
I1101 09:27:35.729171 349088 retry.go:31] will retry after 190.903161ms: ssh: handshake failed: read tcp 192.168.39.1:59904->192.168.39.81:22: read: connection reset by peer
I1101 09:27:36.169539 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1101 09:27:36.172047 349088 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1101 09:27:36.172080 349088 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1101 09:27:36.197501 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1101 09:27:36.215417 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1101 09:27:36.216345 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1101 09:27:36.218262 349088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1101 09:27:36.218284 349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1101 09:27:36.233277 349088 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:27:36.233306 349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1101 09:27:36.276463 349088 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1101 09:27:36.276504 349088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1101 09:27:36.319910 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1101 09:27:36.401223 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1101 09:27:36.402616 349088 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1101 09:27:36.402645 349088 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1101 09:27:36.474457 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1101 09:27:36.545082 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1101 09:27:36.565365 349088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1101 09:27:36.565405 349088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1101 09:27:36.716748 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1101 09:27:36.757480 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:27:36.822242 349088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1101 09:27:36.822276 349088 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1101 09:27:36.899735 349088 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1101 09:27:36.899768 349088 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1101 09:27:37.256955 349088 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1101 09:27:37.256994 349088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1101 09:27:37.400065 349088 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1101 09:27:37.400097 349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1101 09:27:37.407524 349088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1101 09:27:37.407553 349088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1101 09:27:37.688784 349088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1101 09:27:37.688814 349088 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1101 09:27:37.822713 349088 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1101 09:27:37.822758 349088 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1101 09:27:37.951507 349088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1101 09:27:37.951539 349088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1101 09:27:38.005672 349088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1101 09:27:38.005711 349088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1101 09:27:38.029745 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1101 09:27:38.323329 349088 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1101 09:27:38.323367 349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1101 09:27:38.456225 349088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1101 09:27:38.456256 349088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1101 09:27:38.460477 349088 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1101 09:27:38.460514 349088 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1101 09:27:38.500149 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1101 09:27:38.728678 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1101 09:27:38.896168 349088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1101 09:27:38.896207 349088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1101 09:27:38.896644 349088 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1101 09:27:38.896672 349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1101 09:27:39.290048 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.120463391s)
I1101 09:27:39.290105 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.0925653s)
I1101 09:27:39.290134 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.07376527s)
I1101 09:27:39.291321 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1101 09:27:39.472727 349088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1101 09:27:39.472764 349088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1101 09:27:40.004706 349088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1101 09:27:40.004742 349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1101 09:27:40.542351 349088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1101 09:27:40.542382 349088 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1101 09:27:41.141184 349088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1101 09:27:41.141212 349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1101 09:27:41.601642 349088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1101 09:27:41.601674 349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1101 09:27:42.325101 349088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1101 09:27:42.325139 349088 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1101 09:27:42.613924 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.398452068s)
I1101 09:27:42.613977 349088 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.337439597s)
I1101 09:27:42.614004 349088 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1101 09:27:42.614023 349088 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.337520699s)
I1101 09:27:42.614092 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.294151559s)
I1101 09:27:42.614972 349088 node_ready.go:35] waiting up to 6m0s for node "addons-610936" to be "Ready" ...
I1101 09:27:42.637786 349088 node_ready.go:49] node "addons-610936" is "Ready"
I1101 09:27:42.637826 349088 node_ready.go:38] duration metric: took 22.817502ms for node "addons-610936" to be "Ready" ...
I1101 09:27:42.637844 349088 api_server.go:52] waiting for apiserver process to appear ...
I1101 09:27:42.637919 349088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 09:27:42.790062 349088 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1101 09:27:42.793672 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:42.794246 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:42.794278 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:42.794489 349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
I1101 09:27:42.852441 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1101 09:27:43.118636 349088 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-610936" context rescaled to 1 replicas
I1101 09:27:43.411524 349088 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1101 09:27:43.858740 349088 addons.go:239] Setting addon gcp-auth=true in "addons-610936"
I1101 09:27:43.858802 349088 host.go:66] Checking if "addons-610936" exists ...
I1101 09:27:43.860804 349088 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1101 09:27:43.863633 349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:43.864100 349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
I1101 09:27:43.864124 349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
I1101 09:27:43.864271 349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
I1101 09:27:45.858778 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.457511483s)
I1101 09:27:45.858837 349088 addons.go:480] Verifying addon ingress=true in "addons-610936"
I1101 09:27:45.858849 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.384349472s)
I1101 09:27:45.858915 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.313798771s)
I1101 09:27:45.858959 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.14217526s)
I1101 09:27:45.859051 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (9.101542047s)
W1101 09:27:45.859083 349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget created
serviceaccount/gadget created
configmap/gadget created
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
role.rbac.authorization.k8s.io/gadget-role created
rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
daemonset.apps/gadget created
stderr:
Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:27:45.859107 349088 retry.go:31] will retry after 325.822187ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget created
serviceaccount/gadget created
configmap/gadget created
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
role.rbac.authorization.k8s.io/gadget-role created
rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
daemonset.apps/gadget created
stderr:
Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:27:45.859161 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.829382241s)
I1101 09:27:45.859198 349088 addons.go:480] Verifying addon registry=true in "addons-610936"
I1101 09:27:45.859336 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.130621873s)
I1101 09:27:45.859292 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.3591024s)
I1101 09:27:45.859422 349088 addons.go:480] Verifying addon metrics-server=true in "addons-610936"
I1101 09:27:45.860418 349088 out.go:179] * Verifying ingress addon...
I1101 09:27:45.861227 349088 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-610936 service yakd-dashboard -n yakd-dashboard
I1101 09:27:45.861262 349088 out.go:179] * Verifying registry addon...
I1101 09:27:45.862656 349088 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1101 09:27:45.863701 349088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1101 09:27:45.892648 349088 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1101 09:27:45.892677 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:45.892723 349088 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1101 09:27:45.892745 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:45.910111 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.618743413s)
I1101 09:27:45.910140 349088 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.272199257s)
I1101 09:27:45.910173 349088 api_server.go:72] duration metric: took 10.578008261s to wait for apiserver process to appear ...
I1101 09:27:45.910181 349088 api_server.go:88] waiting for apiserver healthz status ...
I1101 09:27:45.910207 349088 api_server.go:253] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
W1101 09:27:45.910196 349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1101 09:27:45.910344 349088 retry.go:31] will retry after 301.381616ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1101 09:27:45.930071 349088 api_server.go:279] https://192.168.39.81:8443/healthz returned 200:
ok
I1101 09:27:45.942691 349088 api_server.go:141] control plane version: v1.34.1
I1101 09:27:45.942722 349088 api_server.go:131] duration metric: took 32.53467ms to wait for apiserver health ...
I1101 09:27:45.942732 349088 system_pods.go:43] waiting for kube-system pods to appear ...
I1101 09:27:46.016731 349088 system_pods.go:59] 16 kube-system pods found
I1101 09:27:46.016782 349088 system_pods.go:61] "amd-gpu-device-plugin-5pdrl" [b8e4e785-d8f6-4d48-8364-9ae272d16ed4] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1101 09:27:46.016793 349088 system_pods.go:61] "coredns-66bc5c9577-87j4r" [cf4e582b-3f40-44c4-afae-bfbf0a9399a9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1101 09:27:46.016801 349088 system_pods.go:61] "coredns-66bc5c9577-gbqkt" [5e62dfed-a46f-4e51-a84d-07825fc7bc70] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1101 09:27:46.016809 349088 system_pods.go:61] "etcd-addons-610936" [7f2ac281-5593-4be9-b542-b326a101d645] Running
I1101 09:27:46.016815 349088 system_pods.go:61] "kube-apiserver-addons-610936" [607f02cf-1d16-4146-a5c2-a31b94c00d75] Running
I1101 09:27:46.016819 349088 system_pods.go:61] "kube-controller-manager-addons-610936" [e1d392bf-c7a3-456a-a1df-9e5e4f598dde] Running
I1101 09:27:46.016825 349088 system_pods.go:61] "kube-ingress-dns-minikube" [2b8eca17-1e14-4918-b8d0-991e96bd3770] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1101 09:27:46.016828 349088 system_pods.go:61] "kube-proxy-wm94c" [e0d02112-bf3c-4352-a3de-02ca7e44f294] Running
I1101 09:27:46.016832 349088 system_pods.go:61] "kube-scheduler-addons-610936" [37a732ca-6715-40d4-b050-425213eb3eac] Running
I1101 09:27:46.016837 349088 system_pods.go:61] "metrics-server-85b7d694d7-br7l2" [04c85380-ef98-4ac1-bf3a-5609222c5b88] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1101 09:27:46.016844 349088 system_pods.go:61] "nvidia-device-plugin-daemonset-668jz" [8afeb20e-4679-4c6a-b8aa-615540852043] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1101 09:27:46.016852 349088 system_pods.go:61] "registry-6b586f9694-zk6f9" [8ca1aaec-2bd9-4d71-8886-79afedd32769] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1101 09:27:46.016857 349088 system_pods.go:61] "registry-creds-764b6fb674-nz5gr" [824af6be-aaa9-462e-afe0-7c82d519ffe4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1101 09:27:46.016873 349088 system_pods.go:61] "registry-proxy-p6swb" [bb847846-a739-4165-9043-1a8601f04bd7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1101 09:27:46.016879 349088 system_pods.go:61] "snapshot-controller-7d9fbc56b8-g6nhw" [9955c1b1-f135-48db-be87-91fdaaa7c2f0] Pending
I1101 09:27:46.016889 349088 system_pods.go:61] "storage-provisioner" [bbbac5d1-8445-4301-918f-9e1633b097d2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1101 09:27:46.016899 349088 system_pods.go:74] duration metric: took 74.158799ms to wait for pod list to return data ...
I1101 09:27:46.016915 349088 default_sa.go:34] waiting for default service account to be created ...
I1101 09:27:46.070857 349088 default_sa.go:45] found service account: "default"
I1101 09:27:46.070905 349088 default_sa.go:55] duration metric: took 53.980293ms for default service account to be created ...
I1101 09:27:46.070920 349088 system_pods.go:116] waiting for k8s-apps to be running ...
I1101 09:27:46.113006 349088 system_pods.go:86] 17 kube-system pods found
I1101 09:27:46.113053 349088 system_pods.go:89] "amd-gpu-device-plugin-5pdrl" [b8e4e785-d8f6-4d48-8364-9ae272d16ed4] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1101 09:27:46.113063 349088 system_pods.go:89] "coredns-66bc5c9577-87j4r" [cf4e582b-3f40-44c4-afae-bfbf0a9399a9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1101 09:27:46.113076 349088 system_pods.go:89] "coredns-66bc5c9577-gbqkt" [5e62dfed-a46f-4e51-a84d-07825fc7bc70] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1101 09:27:46.113082 349088 system_pods.go:89] "etcd-addons-610936" [7f2ac281-5593-4be9-b542-b326a101d645] Running
I1101 09:27:46.113087 349088 system_pods.go:89] "kube-apiserver-addons-610936" [607f02cf-1d16-4146-a5c2-a31b94c00d75] Running
I1101 09:27:46.113092 349088 system_pods.go:89] "kube-controller-manager-addons-610936" [e1d392bf-c7a3-456a-a1df-9e5e4f598dde] Running
I1101 09:27:46.113118 349088 system_pods.go:89] "kube-ingress-dns-minikube" [2b8eca17-1e14-4918-b8d0-991e96bd3770] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1101 09:27:46.113130 349088 system_pods.go:89] "kube-proxy-wm94c" [e0d02112-bf3c-4352-a3de-02ca7e44f294] Running
I1101 09:27:46.113137 349088 system_pods.go:89] "kube-scheduler-addons-610936" [37a732ca-6715-40d4-b050-425213eb3eac] Running
I1101 09:27:46.113148 349088 system_pods.go:89] "metrics-server-85b7d694d7-br7l2" [04c85380-ef98-4ac1-bf3a-5609222c5b88] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1101 09:27:46.113156 349088 system_pods.go:89] "nvidia-device-plugin-daemonset-668jz" [8afeb20e-4679-4c6a-b8aa-615540852043] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1101 09:27:46.113168 349088 system_pods.go:89] "registry-6b586f9694-zk6f9" [8ca1aaec-2bd9-4d71-8886-79afedd32769] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1101 09:27:46.113175 349088 system_pods.go:89] "registry-creds-764b6fb674-nz5gr" [824af6be-aaa9-462e-afe0-7c82d519ffe4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1101 09:27:46.113184 349088 system_pods.go:89] "registry-proxy-p6swb" [bb847846-a739-4165-9043-1a8601f04bd7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1101 09:27:46.113189 349088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d6tfl" [e71257a8-811d-449a-8d78-9fb66dbb5379] Pending
I1101 09:27:46.113200 349088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g6nhw" [9955c1b1-f135-48db-be87-91fdaaa7c2f0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1101 09:27:46.113206 349088 system_pods.go:89] "storage-provisioner" [bbbac5d1-8445-4301-918f-9e1633b097d2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1101 09:27:46.113218 349088 system_pods.go:126] duration metric: took 42.28983ms to wait for k8s-apps to be running ...
I1101 09:27:46.113233 349088 system_svc.go:44] waiting for kubelet service to be running ....
I1101 09:27:46.113295 349088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1101 09:27:46.185838 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:27:46.212282 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1101 09:27:46.374505 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:46.376420 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:46.884887 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:46.887176 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:47.457309 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:47.468305 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:47.580480 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.727982175s)
I1101 09:27:47.580531 349088 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-610936"
I1101 09:27:47.580591 349088 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.719752323s)
I1101 09:27:47.580632 349088 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.4673145s)
I1101 09:27:47.580721 349088 system_svc.go:56] duration metric: took 1.467472013s WaitForService to wait for kubelet
I1101 09:27:47.580743 349088 kubeadm.go:587] duration metric: took 12.248577708s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1101 09:27:47.580770 349088 node_conditions.go:102] verifying NodePressure condition ...
I1101 09:27:47.582183 349088 out.go:179] * Verifying csi-hostpath-driver addon...
I1101 09:27:47.582186 349088 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
I1101 09:27:47.584133 349088 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1101 09:27:47.584735 349088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1101 09:27:47.585467 349088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1101 09:27:47.585491 349088 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1101 09:27:47.663562 349088 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1101 09:27:47.663591 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:47.666508 349088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1101 09:27:47.666540 349088 node_conditions.go:123] node cpu capacity is 2
I1101 09:27:47.666560 349088 node_conditions.go:105] duration metric: took 85.78252ms to run NodePressure ...
I1101 09:27:47.666576 349088 start.go:242] waiting for startup goroutines ...
I1101 09:27:47.881842 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:47.882842 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:47.886528 349088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1101 09:27:47.886556 349088 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1101 09:27:48.090205 349088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1101 09:27:48.090230 349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1101 09:27:48.091584 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:48.238218 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1101 09:27:48.372289 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:48.374702 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:48.590458 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:48.872682 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:48.875219 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:49.093604 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:49.374826 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:49.375040 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:49.594000 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:49.766509 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.58059868s)
W1101 09:27:49.766571 349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:27:49.766602 349088 retry.go:31] will retry after 498.527289ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
I1101 09:27:49.766610 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.554281701s)
I1101 09:27:49.887598 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:49.888126 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:50.098476 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:50.243456 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.005192092s)
I1101 09:27:50.244608 349088 addons.go:480] Verifying addon gcp-auth=true in "addons-610936"
I1101 09:27:50.246395 349088 out.go:179] * Verifying gcp-auth addon...
I1101 09:27:50.248414 349088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1101 09:27:50.266019 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:27:50.289275 349088 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1101 09:27:50.289302 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:27:50.387406 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:50.388029 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:50.601719 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:50.752645 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:27:50.868974 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:50.874811 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:51.090819 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:51.255421 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:27:51.372275 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:51.373788 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:51.589814 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:51.755758 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:27:51.875829 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:51.876484 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:52.091604 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:52.125154 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.859078037s)
W1101 09:27:52.125214 349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:27:52.125243 349088 retry.go:31] will retry after 387.959811ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
I1101 09:27:52.254807 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:27:52.370968 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:52.373483 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:52.513666 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:27:52.593530 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:52.754905 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:27:52.871758 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:52.873590 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:53.090947 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:53.253461 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:27:53.375740 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:53.377742 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:53.594336 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:53.753713 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:27:53.870968 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.357252791s)
W1101 09:27:53.871039 349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:27:53.871068 349088 retry.go:31] will retry after 850.837671ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
I1101 09:27:53.887158 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:53.888402 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:54.092053 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:54.255688 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:27:54.371469 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:54.372537 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:54.591180 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:54.722435 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:27:54.755187 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:27:54.868852 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:54.877857 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:55.091104 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:55.254917 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:27:55.370838 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:55.372955 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:55.593420 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:55.756621 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:27:55.843886 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.121372319s)
W1101 09:27:55.843975 349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:27:55.844006 349088 retry.go:31] will retry after 934.689197ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
I1101 09:27:55.867479 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:55.869106 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:56.090755 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:56.251968 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:27:56.367783 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:56.369239 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:56.589914 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:56.754432 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:27:56.779656 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:27:56.869071 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:56.871332 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:57.091469 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:57.255688 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:27:57.367149 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:57.372458 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:57.589425 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:57.754261 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
W1101 09:27:57.763628 349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:27:57.763662 349088 retry.go:31] will retry after 1.073735115s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
I1101 09:27:57.866539 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:57.869215 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:58.091779 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:58.253148 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:27:58.368730 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:58.370466 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:58.589950 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:58.754656 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:27:58.837614 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:27:58.870757 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:58.872826 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:59.096397 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:59.255694 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:27:59.369690 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:27:59.375404 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:59.590707 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:27:59.755775 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:27:59.873541 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:27:59.877816 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:00.094827 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:00.150676 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.313002265s)
W1101 09:28:00.150741 349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:28:00.150775 349088 retry.go:31] will retry after 2.397028196s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:28:00.255148 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:00.368283 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:00.375062 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:00.588892 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:00.753094 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:00.872702 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:00.872919 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:01.089828 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:01.251883 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:01.367197 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:01.368944 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:01.590146 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:01.755612 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:01.870183 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:01.872692 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:02.090361 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:02.254623 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:02.367577 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:02.368778 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:02.548981 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:28:02.589611 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:02.752581 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:02.867818 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:02.870083 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:03.089513 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:03.253757 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:03.371426 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:03.375114 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:03.592484 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:03.753746 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:03.766750 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.217712585s)
W1101 09:28:03.766808 349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:28:03.766839 349088 retry.go:31] will retry after 4.826998891s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:28:03.876688 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:03.878536 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:04.092507 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:04.255449 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:04.371832 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:04.376973 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:04.590170 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:04.753896 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:04.872968 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:04.874737 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:05.099436 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:05.253777 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:05.374511 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:05.375604 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:05.591029 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:05.756054 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:05.870902 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:05.872382 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:06.582219 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:06.582755 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:06.584323 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:06.584428 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:06.589648 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:06.755643 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:06.872236 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:06.872294 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:07.092463 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:07.255721 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:07.373419 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:07.374078 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:07.590279 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:07.752724 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:07.868230 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:07.870830 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:08.093256 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:08.253779 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:08.365989 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:08.367968 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:08.589422 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:08.594584 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:28:08.751538 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:08.972714 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:08.975758 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:09.090562 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:09.253467 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:09.369352 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:09.369386 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:09.592492 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:09.754674 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:09.795361 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.2007288s)
W1101 09:28:09.795502 349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:28:09.795533 349088 retry.go:31] will retry after 3.483295677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:28:09.889588 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:09.889754 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:10.091064 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:10.252759 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:10.369205 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:10.370615 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:10.591931 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:10.755265 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:10.869266 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:10.873140 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:11.090724 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:11.258190 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:11.842575 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:11.842638 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:11.843463 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:11.844289 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:11.866936 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:11.868477 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:12.094180 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:12.257779 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:12.369335 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:12.370262 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:12.590699 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:12.753997 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:12.873239 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:12.873838 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:13.096391 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:13.262598 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:13.279807 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:28:13.375199 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:13.375305 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:13.594719 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:13.756018 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:13.870499 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:13.874629 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:14.092422 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:14.256836 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:14.368688 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:14.372538 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:14.591170 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:14.626197 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.346342043s)
W1101 09:28:14.626257 349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:28:14.626285 349088 retry.go:31] will retry after 11.238635582s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:28:14.756318 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:14.871756 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:14.872388 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:15.091417 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:15.253067 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:15.369356 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:15.369487 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:15.589679 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:15.751851 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:15.870300 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:15.872929 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:16.093454 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:16.256020 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:16.368811 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:16.377654 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:16.590726 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:16.751971 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:16.866737 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:16.868073 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:17.091111 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:17.253191 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:17.380787 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:17.382875 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:17.694211 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:17.753443 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:17.867891 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:17.869155 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:18.097913 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:18.256274 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:18.367711 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1101 09:28:18.367995 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:18.590121 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:18.755701 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:18.868358 349088 kapi.go:107] duration metric: took 33.004649349s to wait for kubernetes.io/minikube-addons=registry ...
I1101 09:28:18.869253 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:19.092998 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:19.254996 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:19.370208 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:19.590250 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:19.755976 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:19.869582 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:20.090462 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:20.254219 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:20.367836 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:20.590144 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:20.764454 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:20.872908 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:21.099159 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:21.257438 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:21.369740 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:21.592467 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:21.755458 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:21.878038 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:22.102035 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:22.261300 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:22.375543 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:22.599436 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:22.761904 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:22.868483 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:23.091712 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:23.252322 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:23.367921 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:23.595306 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:23.753042 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:23.866820 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:24.089256 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:24.253336 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:24.368497 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:24.589260 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:24.753526 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:24.867561 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:25.094722 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:25.253215 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:25.372605 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:25.591240 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:25.755219 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:25.865684 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:28:25.866978 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:26.092927 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:26.254479 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:26.415023 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:26.594693 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:26.757331 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:26.869279 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:26.939561 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.073824208s)
W1101 09:28:26.939622 349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:28:26.939646 349088 retry.go:31] will retry after 12.516279473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:28:27.090209 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:27.252848 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:27.367599 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:27.591575 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:27.753085 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:27.886082 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:28.091552 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:28.256058 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:28.371186 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:28.918878 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:28.919061 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:28.920489 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:29.089665 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:29.252331 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:29.371392 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:29.589081 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:29.751776 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:29.868841 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:30.094135 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:30.253244 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:30.371621 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:30.593244 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:30.755576 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:30.868192 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:31.094229 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:31.253208 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:31.370372 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:31.589601 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:31.752212 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:31.868533 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:32.090517 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:32.251653 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:32.368430 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:32.596828 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:32.759991 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:32.866777 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:33.089980 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:33.257570 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:33.368813 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:33.595521 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:33.753194 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:33.874801 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:34.092441 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:34.254856 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:34.370155 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:34.600736 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:34.760531 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:34.868188 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:35.220929 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:35.258446 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:35.368460 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:35.592608 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:35.755534 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:35.875635 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:36.095721 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:36.256570 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:36.368182 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:36.590310 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:36.754476 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:36.871777 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:37.090067 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:37.253914 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:37.369640 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:37.590648 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:37.752447 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:37.870442 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:38.089769 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:38.253210 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:38.368371 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:38.599316 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:38.754287 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:38.872381 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:39.100825 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:39.254338 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:39.374120 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:39.456188 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:28:39.593214 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:39.754141 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:39.878222 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:40.089519 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:40.256163 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:40.369367 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:40.590695 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:40.756104 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:40.797907 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.34166729s)
W1101 09:28:40.797954 349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:28:40.797981 349088 retry.go:31] will retry after 15.246599985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:28:40.876729 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:41.093848 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:41.253770 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:41.369521 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:41.590187 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:41.754175 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:41.870656 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:42.094068 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:42.252584 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:42.368419 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:42.599797 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:42.761338 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:42.882315 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:43.093394 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:43.254577 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:43.367716 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:43.594791 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:43.758565 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:43.870242 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:44.091571 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:44.253654 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:44.371270 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:44.594491 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:44.766584 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:44.871242 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:45.092157 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:45.256141 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:45.373960 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:45.592980 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:45.753566 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:45.867354 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:46.092779 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:46.252268 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:46.372495 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:46.603612 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:47.016509 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:47.018581 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:47.095070 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:47.254032 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:47.366854 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:47.590717 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:47.754654 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:47.866530 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:48.091740 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:48.253582 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:48.368335 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:48.589009 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:48.757155 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:48.867766 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:49.091014 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:49.257198 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:49.370681 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:49.591212 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:49.753115 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:49.870433 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:50.093323 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:50.256487 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:50.369852 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:50.590447 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:50.753200 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:50.871272 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:51.098706 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:51.253373 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:51.370210 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:51.590520 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1101 09:28:51.769214 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:51.872303 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:52.095093 349088 kapi.go:107] duration metric: took 1m4.510352876s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1101 09:28:52.256098 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:52.367219 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:52.756656 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:52.867404 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:53.274233 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:53.372673 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:53.755711 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:53.868164 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:54.256268 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:54.366966 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:54.753542 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:54.868746 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:55.254072 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:55.371741 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:55.753280 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:55.873961 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:56.044993 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1101 09:28:56.264356 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:56.368776 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:56.756276 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:56.872235 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:57.253668 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:57.291524 349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.246490325s)
W1101 09:28:57.291567 349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:28:57.291590 349088 retry.go:31] will retry after 30.489031451s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1101 09:28:57.379630 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:57.752917 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:57.869947 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:58.254785 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:58.366676 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:58.753752 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:58.867598 349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1101 09:28:59.262720 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:28:59.370933 349088 kapi.go:107] duration metric: took 1m13.508272969s to wait for app.kubernetes.io/name=ingress-nginx ...
I1101 09:28:59.753819 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:29:00.255336 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:29:00.756647 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:29:01.258605 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:29:01.752899 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:29:02.253665 349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1101 09:29:02.753651 349088 kapi.go:107] duration metric: took 1m12.505231126s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1101 09:29:02.755517 349088 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-610936 cluster.
I1101 09:29:02.757225 349088 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1101 09:29:02.758516 349088 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
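[editor's note] The gcp-auth messages above mention opting a pod out of credential mounting by adding a label with the `gcp-auth-skip-secret` key. As a sketch only (the label key comes from the log; the pod name and image are placeholders), such a pod spec might look like:

```yaml
# Hypothetical pod that opts out of GCP credential mounting in a
# cluster with the gcp-auth addon enabled; only the label key is
# taken from the addon's own message above.
apiVersion: v1
kind: Pod
metadata:
  name: example-no-gcp-creds
  labels:
    gcp-auth-skip-secret: "true"
spec:
  containers:
    - name: app
      image: nginx
```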
I1101 09:29:27.782618 349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
W1101 09:29:28.510880 349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
W1101 09:29:28.511027 349088 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
]
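[editor's note] The repeated `error validating "/etc/kubernetes/addons/ig-crd.yaml": [apiVersion not set, kind not set]` above means kubectl's client-side validation found a document in that file missing its top-level type fields, so the inspektor-gadget CRD was likely empty or malformed when rendered. For reference, every Kubernetes manifest must declare both fields at the top level; a minimal well-formed CRD header (illustrative names only, not the actual ig-crd.yaml contents) looks like:

```yaml
# Minimal header showing the two fields kubectl validation requires;
# the resource name below is a placeholder, not the real gadget CRD.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.gadget.example.io
```

As the error text notes, `--validate=false` would suppress the check, but that would let the malformed document through rather than fix it.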
I1101 09:29:28.512682 349088 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, default-storageclass, storage-provisioner, ingress-dns, nvidia-device-plugin, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I1101 09:29:28.513693 349088 addons.go:515] duration metric: took 1m53.181636608s for enable addons: enabled=[registry-creds amd-gpu-device-plugin default-storageclass storage-provisioner ingress-dns nvidia-device-plugin cloud-spanner metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I1101 09:29:28.513752 349088 start.go:247] waiting for cluster config update ...
I1101 09:29:28.513779 349088 start.go:256] writing updated cluster config ...
I1101 09:29:28.514139 349088 ssh_runner.go:195] Run: rm -f paused
I1101 09:29:28.521080 349088 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1101 09:29:28.524935 349088 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gbqkt" in "kube-system" namespace to be "Ready" or be gone ...
I1101 09:29:28.531625 349088 pod_ready.go:94] pod "coredns-66bc5c9577-gbqkt" is "Ready"
I1101 09:29:28.531658 349088 pod_ready.go:86] duration metric: took 6.69521ms for pod "coredns-66bc5c9577-gbqkt" in "kube-system" namespace to be "Ready" or be gone ...
I1101 09:29:28.534259 349088 pod_ready.go:83] waiting for pod "etcd-addons-610936" in "kube-system" namespace to be "Ready" or be gone ...
I1101 09:29:28.539687 349088 pod_ready.go:94] pod "etcd-addons-610936" is "Ready"
I1101 09:29:28.539712 349088 pod_ready.go:86] duration metric: took 5.42952ms for pod "etcd-addons-610936" in "kube-system" namespace to be "Ready" or be gone ...
I1101 09:29:28.542400 349088 pod_ready.go:83] waiting for pod "kube-apiserver-addons-610936" in "kube-system" namespace to be "Ready" or be gone ...
I1101 09:29:28.547752 349088 pod_ready.go:94] pod "kube-apiserver-addons-610936" is "Ready"
I1101 09:29:28.547787 349088 pod_ready.go:86] duration metric: took 5.363902ms for pod "kube-apiserver-addons-610936" in "kube-system" namespace to be "Ready" or be gone ...
I1101 09:29:28.550457 349088 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-610936" in "kube-system" namespace to be "Ready" or be gone ...
I1101 09:29:28.928116 349088 pod_ready.go:94] pod "kube-controller-manager-addons-610936" is "Ready"
I1101 09:29:28.928150 349088 pod_ready.go:86] duration metric: took 377.66462ms for pod "kube-controller-manager-addons-610936" in "kube-system" namespace to be "Ready" or be gone ...
I1101 09:29:29.126544 349088 pod_ready.go:83] waiting for pod "kube-proxy-wm94c" in "kube-system" namespace to be "Ready" or be gone ...
I1101 09:29:29.525560 349088 pod_ready.go:94] pod "kube-proxy-wm94c" is "Ready"
I1101 09:29:29.525595 349088 pod_ready.go:86] duration metric: took 399.020613ms for pod "kube-proxy-wm94c" in "kube-system" namespace to be "Ready" or be gone ...
I1101 09:29:29.726321 349088 pod_ready.go:83] waiting for pod "kube-scheduler-addons-610936" in "kube-system" namespace to be "Ready" or be gone ...
I1101 09:29:30.124793 349088 pod_ready.go:94] pod "kube-scheduler-addons-610936" is "Ready"
I1101 09:29:30.124825 349088 pod_ready.go:86] duration metric: took 398.475664ms for pod "kube-scheduler-addons-610936" in "kube-system" namespace to be "Ready" or be gone ...
I1101 09:29:30.124837 349088 pod_ready.go:40] duration metric: took 1.603719867s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1101 09:29:30.173484 349088 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
I1101 09:29:30.175425 349088 out.go:179] * Done! kubectl is now configured to use "addons-610936" cluster and "default" namespace by default
==> CRI-O <==
Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.341631459Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:fafb3f50759fb3ca608566a3f99c714cd2c84822225a83a2784a9703746c5e3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1761989244899461357,StartedAt:1761989245042825453,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.34.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-610936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1fd67ea0ce7135fb26c7c0d9556b6b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/ab1fd67ea0ce7135fb26c7c0d9556b6b/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/ab1fd67ea0ce7135fb26c7c0d9556b6b/containers/kube-scheduler/1aac6d6a,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-addons-610936_ab1fd67ea
0ce7135fb26c7c0d9556b6b/kube-scheduler/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=4190db1c-9103-4a90-9937-d3e6ca317889 name=/runtime.v1.RuntimeService/ContainerStatus
Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.342498683Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:8034149d00598833129fa576f1e2fc17f25643b0868c221ee401136b08eb574f,Verbose:false,}" file="otel-collector/interceptors.go:62" id=cf6957a3-49c2-4a47-8189-4c206d7e8a72 name=/runtime.v1.RuntimeService/ContainerStatus
Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.342598777Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:8034149d00598833129fa576f1e2fc17f25643b0868c221ee401136b08eb574f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1761989244844233021,StartedAt:1761989244978188634,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.34.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-610936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6f27c5410154d627073969293976eea,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\"
:\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b6f27c5410154d627073969293976eea/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b6f27c5410154d627073969293976eea/containers/kube-apiserver/466587fd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRel
abel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-addons-610936_b6f27c5410154d627073969293976eea/kube-apiserver/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=cf6957a3-49c2-4a47-8189-4c206d7e8a72 name=/runtime.v1.RuntimeService/ContainerStatus
Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.343320485Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:2273a9881f45e98bd51b08079c70bba61edb15367d96d4fc307a139e6efdecc0,Verbose:false,}" file="otel-collector/interceptors.go:62" id=8c1edf82-0c2c-49a0-880e-741422a0414d name=/runtime.v1.RuntimeService/ContainerStatus
Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.343607204Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:2273a9881f45e98bd51b08079c70bba61edb15367d96d4fc307a139e6efdecc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1761989244801927742,StartedAt:1761989244909071832,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.34.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-610936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2ba9b2dfdbddd8abd459262fdb458f0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":1025
7,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/d2ba9b2dfdbddd8abd459262fdb458f0/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/d2ba9b2dfdbddd8abd459262fdb458f0/containers/kube-controller-manager/bcf83e52,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,
HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-addons-610936_d2ba9b2dfdbddd8abd459262fdb458f0/kube-controller-manager/0.log,Resources:&ContainerResources{Linux:&LinuxContain
erResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=8c1edf82-0c2c-49a0-880e-741422a0414d name=/runtime.v1.RuntimeService/ContainerStatus
Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.344702441Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:ab4b8cec913e5fbce9ab209d6099961cc18d592862676af287d1934a1852153c,Verbose:false,}" file="otel-collector/interceptors.go:62" id=b9adc70d-5b94-4eaa-81e9-50f167f50344 name=/runtime.v1.RuntimeService/ContainerStatus
Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.344821140Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:ab4b8cec913e5fbce9ab209d6099961cc18d592862676af287d1934a1852153c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1761989244795687726,StartedAt:1761989244984625690,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.6.4-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-610936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8fcdd29f8edd4ee30ea406f31d39174,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/c8fcdd29f8edd4ee30ea406f31d39174/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/c8fcdd29f8edd4ee30ea406f31d39174/containers/etcd/9544f3a2,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPA
GATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-addons-610936_c8fcdd29f8edd4ee30ea406f31d39174/etcd/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=b9adc70d-5b94-4eaa-81e9-50f167f50344 name=/runtime.v1.RuntimeService/ContainerStatus
Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.364102955Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=522c58fe-6129-408c-9036-9eade60f4930 name=/runtime.v1.RuntimeService/Version
Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.364480129Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=522c58fe-6129-408c-9036-9eade60f4930 name=/runtime.v1.RuntimeService/Version
Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.366811036Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b891a3d-20c0-4a46-8ad7-070e7488708f name=/runtime.v1.ImageService/ImageFsInfo
Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.368427323Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761989551368394216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588624,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b891a3d-20c0-4a46-8ad7-070e7488708f name=/runtime.v1.ImageService/ImageFsInfo
Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.369158439Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92f0bdd1-0b08-47cc-88a1-f6bd43871662 name=/runtime.v1.RuntimeService/ListContainers
Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.369218489Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92f0bdd1-0b08-47cc-88a1-f6bd43871662 name=/runtime.v1.RuntimeService/ListContainers
Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.369522338Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84b539c862293409e19596e3159d20acac7e0848a2026baa59b6de3a47e64c6b,PodSandboxId:b1b1f64c123d730eab4a2c71844ebfa30cd6160c6c81172e7fc76f3c3b8bf320,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1761989409682555130,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2369d8f-b848-4d1a-9e8f-e2845ef60291,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd19ad29558e96fd24c9acaa5cd8adb9b3aee6290ecf781a39f59c0546f61318,PodSandboxId:d07d1e33d07942a63b098ca9592423628b138ef0e60f74304224f6d6deda6887,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761989372567427309,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85f633d6-3539-4443-8d47-46b81caf92be,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e517093db9d4a4b15378a2d18d839a5570e4d1e23236ad4a1cad03529a0236,PodSandboxId:d9d3d351f5293fa83d94acf37673dbef548b72024f30eac7a887ced0d73f9fc1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761989338922346343,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-kdk56,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d67bbf1f-1e3d-46ae-a872-e71b55056019,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:96ccf439e73c9a5761b8bc1cf8d005b3c28c9ab5a2d04dd717a6827c098973da,PodSandboxId:675df002ba5dc82e9984a12e0b1c3647715f7dccd3b5596cb47ca9521fb35ff3,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761989322460937835,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-k7flz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ceec19dd-43fa-46ed-9829-b10278e5cf2c,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51396af81c8b50de5cf9ae9400bbce46d91ea910302ef89e6b2de273b3b70e4d,PodSandboxId:c8df62cf4ee2cf0a91f01a6e9ca4b5deb939aa08dc05505b672a50d867ae4a8b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761989316886731743,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-v2tvv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f0b455ad-6d04-4986-ab33-3edfd0fb7ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:302c7cc67db1757a7b34fe684098b4d3b00a0122c6396a50c0d5451ede4a5f09,PodSandboxId:23ca43c714536d040e2c7270181497a32d858a68082d6a9c9f3c375bdaec718b,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761989311374810116,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-8zz4q,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: b2c07026-df4c-4c8f-a77d-b41864429b49,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be11992f41573c1585e4fa6469d8beb98321d90e3b99f00c3974200e17670788,PodSandboxId:d3e41f66ab23a50f0ce3ee5b382f3c94e31988f7f44a45fb36bb4bd43fb952ee,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761989291977140257,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b8eca17-1e14-4918-b8d0-991e96bd3770,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cf8a7bf82657d6a2878db78f5553c967593ec366d9c0aa44e3e1f5c71847f6e,PodSandboxId:93cec0513adbfdddcc64cde2839d7fcd15a73d49b0473cc
30474649cd7978e8f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761989268682328641,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-5pdrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8e4e785-d8f6-4d48-8364-9ae272d16ed4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5af9f4f51dd4ff474fd24e5769516432460f5a719cc8ada9f3335798427616bd,PodSandboxId:a58fe82
79d7b50de30d47e7fc96602966eb296f1b93c09afd38283fc73ab0b45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761989265539546288,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbbac5d1-8445-4301-918f-9e1633b097d2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcbd5e889ac7c859909ef3521fa201ecdefad68531bc297090e5628fd14802f4,PodSandboxId:89373f63c3094b12749
ee8e00e27d7a139a9b22658990633634e875782019999,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761989257579234210,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gbqkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e62dfed-a46f-4e51-a84d-07825fc7bc70,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b35b53f7950068a92486e0920f2ff6340f6e3caa74dd0d95fbb470ac779d65b6,PodSandboxId:ab59bd977914c3fb69cbd78e09d487830dd66cb5d08696ee521eafc2bcd2d562,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761989256709476488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wm94c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d02112-bf3c-4352-a3de-02ca7e44f294,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fafb3f50759fb3ca608566a3f99c714cd2c84822225a83a2784a9703746c5e3f,PodSandboxId:07a69256ce35305d0596b690f1e23533fa1d1a20253f88633386f21b1a477eb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761989244771640063,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-610936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1fd67ea0ce7135fb26c7c0d9556b6b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports:
[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8034149d00598833129fa576f1e2fc17f25643b0868c221ee401136b08eb574f,PodSandboxId:d772845cf8c044bb6fbbe42fec2170073a28b9e7ea660adbe1c218ca02be40e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761989244722454011,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-610936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6f27c5410154d627073969293976
eea,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2273a9881f45e98bd51b08079c70bba61edb15367d96d4fc307a139e6efdecc0,PodSandboxId:b5db80dbb4af6a683fee765bab3c67062f47fc1279d8004d661c494c09847944,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761989244705683901,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name:
kube-controller-manager-addons-610936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2ba9b2dfdbddd8abd459262fdb458f0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab4b8cec913e5fbce9ab209d6099961cc18d592862676af287d1934a1852153c,PodSandboxId:a4fe65821e195dbbe96f5d3581049567288b70850a9a3d863af15700b1875598,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:176198924
4710560354,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-610936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8fcdd29f8edd4ee30ea406f31d39174,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92f0bdd1-0b08-47cc-88a1-f6bd43871662 name=/runtime.v1.RuntimeService/ListContainers
Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.399197425Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.401486544Z" level=debug msg="Using SQLite blob info cache at /var/lib/containers/cache/blob-info-cache-v1.sqlite" file="blobinfocache/default.go:74"
Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.401660288Z" level=debug msg="Source is a manifest list; copying (only) instance sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 for current system" file="copy/copy.go:318"
Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.401771388Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.417165200Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=566aa186-9466-4ff5-a50e-38908c664969 name=/runtime.v1.RuntimeService/Version
Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.417375924Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=566aa186-9466-4ff5-a50e-38908c664969 name=/runtime.v1.RuntimeService/Version
Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.418998951Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=85d88747-e8d7-4416-8d61-2f4ff8d5beb8 name=/runtime.v1.ImageService/ImageFsInfo
Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.420304332Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761989551420274411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588624,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=85d88747-e8d7-4416-8d61-2f4ff8d5beb8 name=/runtime.v1.ImageService/ImageFsInfo
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
84b539c862293 docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 2 minutes ago Running nginx 0 b1b1f64c123d7 nginx
cd19ad29558e9 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 2 minutes ago Running busybox 0 d07d1e33d0794 busybox
24e517093db9d registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd 3 minutes ago Running controller 0 d9d3d351f5293 ingress-nginx-controller-675c5ddd98-kdk56
96ccf439e73c9 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39 3 minutes ago Exited patch 0 675df002ba5dc ingress-nginx-admission-patch-k7flz
51396af81c8b5 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39 3 minutes ago Exited create 0 c8df62cf4ee2c ingress-nginx-admission-create-v2tvv
302c7cc67db17 ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb 4 minutes ago Running gadget 0 23ca43c714536 gadget-8zz4q
be11992f41573 docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 4 minutes ago Running minikube-ingress-dns 0 d3e41f66ab23a kube-ingress-dns-minikube
9cf8a7bf82657 docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 4 minutes ago Running amd-gpu-device-plugin 0 93cec0513adbf amd-gpu-device-plugin-5pdrl
5af9f4f51dd4f 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 a58fe8279d7b5 storage-provisioner
fcbd5e889ac7c 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969 4 minutes ago Running coredns 0 89373f63c3094 coredns-66bc5c9577-gbqkt
b35b53f795006 fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7 4 minutes ago Running kube-proxy 0 ab59bd977914c kube-proxy-wm94c
fafb3f50759fb 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813 5 minutes ago Running kube-scheduler 0 07a69256ce353 kube-scheduler-addons-610936
8034149d00598 c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97 5 minutes ago Running kube-apiserver 0 d772845cf8c04 kube-apiserver-addons-610936
ab4b8cec913e5 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115 5 minutes ago Running etcd 0 a4fe65821e195 etcd-addons-610936
2273a9881f45e c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f 5 minutes ago Running kube-controller-manager 0 b5db80dbb4af6 kube-controller-manager-addons-610936
==> coredns [fcbd5e889ac7c859909ef3521fa201ecdefad68531bc297090e5628fd14802f4] <==
[INFO] 10.244.0.8:58264 - 65116 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000156216s
[INFO] 10.244.0.8:58264 - 16439 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00085078s
[INFO] 10.244.0.8:58264 - 27843 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000164212s
[INFO] 10.244.0.8:58264 - 55194 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000244965s
[INFO] 10.244.0.8:58264 - 42297 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000131847s
[INFO] 10.244.0.8:58264 - 4296 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000249077s
[INFO] 10.244.0.8:58264 - 25593 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000781471s
[INFO] 10.244.0.8:52540 - 37455 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000119566s
[INFO] 10.244.0.8:52540 - 37746 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000205797s
[INFO] 10.244.0.8:45833 - 44000 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000082299s
[INFO] 10.244.0.8:45833 - 44270 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000108943s
[INFO] 10.244.0.8:38702 - 44032 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000157692s
[INFO] 10.244.0.8:38702 - 44302 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000130207s
[INFO] 10.244.0.8:37346 - 62163 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000082154s
[INFO] 10.244.0.8:37346 - 62381 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00011665s
[INFO] 10.244.0.23:51626 - 18337 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000501441s
[INFO] 10.244.0.23:46816 - 57875 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000234406s
[INFO] 10.244.0.23:43533 - 34290 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000324376s
[INFO] 10.244.0.23:48310 - 43734 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000145612s
[INFO] 10.244.0.23:56656 - 35606 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000210129s
[INFO] 10.244.0.23:50951 - 5406 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117585s
[INFO] 10.244.0.23:41749 - 29672 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.004549596s
[INFO] 10.244.0.23:60842 - 5488 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004401615s
[INFO] 10.244.0.26:58516 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.002262575s
[INFO] 10.244.0.26:49475 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000268571s
==> describe nodes <==
Name: addons-610936
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-610936
kubernetes.io/os=linux
minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
minikube.k8s.io/name=addons-610936
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_11_01T09_27_31_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-610936
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 01 Nov 2025 09:27:27 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-610936
AcquireTime: <unset>
RenewTime: Sat, 01 Nov 2025 09:32:26 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 01 Nov 2025 09:30:35 +0000 Sat, 01 Nov 2025 09:27:25 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 01 Nov 2025 09:30:35 +0000 Sat, 01 Nov 2025 09:27:25 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 01 Nov 2025 09:30:35 +0000 Sat, 01 Nov 2025 09:27:25 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 01 Nov 2025 09:30:35 +0000 Sat, 01 Nov 2025 09:27:31 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.81
Hostname: addons-610936
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
System Info:
Machine ID: 067cbdb7aeda471aaaf4ef736820bc12
System UUID: 067cbdb7-aeda-471a-aaf4-ef736820bc12
Boot ID: fec582e8-2949-4206-b135-a486049758e3
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.34.1
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (14 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m1s
default hello-world-app-5d498dc89-d6d67 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m26s
gadget gadget-8zz4q 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m48s
ingress-nginx ingress-nginx-controller-675c5ddd98-kdk56 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m46s
kube-system amd-gpu-device-plugin-5pdrl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m52s
kube-system coredns-66bc5c9577-gbqkt 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4m55s
kube-system etcd-addons-610936 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 5m
kube-system kube-apiserver-addons-610936 250m (12%) 0 (0%) 0 (0%) 0 (0%) 5m
kube-system kube-controller-manager-addons-610936 200m (10%) 0 (0%) 0 (0%) 0 (0%) 5m
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m50s
kube-system kube-proxy-wm94c 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m56s
kube-system kube-scheduler-addons-610936 100m (5%) 0 (0%) 0 (0%) 0 (0%) 5m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m49s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m53s kube-proxy
Normal NodeHasSufficientMemory 5m8s (x8 over 5m8s) kubelet Node addons-610936 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m8s (x8 over 5m8s) kubelet Node addons-610936 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m8s (x7 over 5m8s) kubelet Node addons-610936 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 5m8s kubelet Updated Node Allocatable limit across pods
Normal Starting 5m1s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 5m1s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 5m kubelet Node addons-610936 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m kubelet Node addons-610936 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m kubelet Node addons-610936 status is now: NodeHasSufficientPID
Normal NodeReady 5m kubelet Node addons-610936 status is now: NodeReady
Normal RegisteredNode 4m57s node-controller Node addons-610936 event: Registered Node addons-610936 in Controller
==> dmesg <==
[ +5.151757] kauditd_printk_skb: 56 callbacks suppressed
[Nov 1 09:28] kauditd_printk_skb: 5 callbacks suppressed
[ +7.256636] kauditd_printk_skb: 11 callbacks suppressed
[ +3.234705] kauditd_printk_skb: 11 callbacks suppressed
[ +5.240489] kauditd_printk_skb: 41 callbacks suppressed
[ +5.087707] kauditd_printk_skb: 32 callbacks suppressed
[ +5.212666] kauditd_printk_skb: 101 callbacks suppressed
[ +3.528035] kauditd_printk_skb: 76 callbacks suppressed
[ +3.631994] kauditd_printk_skb: 155 callbacks suppressed
[ +0.000033] kauditd_printk_skb: 59 callbacks suppressed
[Nov 1 09:29] kauditd_printk_skb: 68 callbacks suppressed
[ +0.000581] kauditd_printk_skb: 2 callbacks suppressed
[ +12.029563] kauditd_printk_skb: 41 callbacks suppressed
[ +5.978499] kauditd_printk_skb: 22 callbacks suppressed
[ +5.706433] kauditd_printk_skb: 38 callbacks suppressed
[Nov 1 09:30] kauditd_printk_skb: 105 callbacks suppressed
[ +0.133414] kauditd_printk_skb: 216 callbacks suppressed
[ +3.778704] kauditd_printk_skb: 85 callbacks suppressed
[ +1.005873] kauditd_printk_skb: 79 callbacks suppressed
[ +2.925302] kauditd_printk_skb: 28 callbacks suppressed
[ +8.187217] kauditd_printk_skb: 37 callbacks suppressed
[ +9.452117] kauditd_printk_skb: 10 callbacks suppressed
[ +0.000030] kauditd_printk_skb: 10 callbacks suppressed
[Nov 1 09:31] kauditd_printk_skb: 41 callbacks suppressed
[Nov 1 09:32] kauditd_printk_skb: 127 callbacks suppressed
==> etcd [ab4b8cec913e5fbce9ab209d6099961cc18d592862676af287d1934a1852153c] <==
{"level":"info","ts":"2025-11-01T09:28:35.208173Z","caller":"traceutil/trace.go:172","msg":"trace[824776101] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1039; }","duration":"131.770903ms","start":"2025-11-01T09:28:35.076395Z","end":"2025-11-01T09:28:35.208166Z","steps":["trace[824776101] 'agreement among raft nodes before linearized reading' (duration: 131.613863ms)"],"step_count":1}
{"level":"info","ts":"2025-11-01T09:28:35.212732Z","caller":"traceutil/trace.go:172","msg":"trace[595837021] transaction","detail":"{read_only:false; response_revision:1040; number_of_response:1; }","duration":"197.116533ms","start":"2025-11-01T09:28:35.015599Z","end":"2025-11-01T09:28:35.212716Z","steps":["trace[595837021] 'process raft request' (duration: 192.938286ms)"],"step_count":1}
{"level":"info","ts":"2025-11-01T09:28:47.000848Z","caller":"traceutil/trace.go:172","msg":"trace[593017516] transaction","detail":"{read_only:false; response_revision:1140; number_of_response:1; }","duration":"330.242343ms","start":"2025-11-01T09:28:46.670595Z","end":"2025-11-01T09:28:47.000838Z","steps":["trace[593017516] 'process raft request' (duration: 330.139883ms)"],"step_count":1}
{"level":"info","ts":"2025-11-01T09:28:47.000819Z","caller":"traceutil/trace.go:172","msg":"trace[445217228] linearizableReadLoop","detail":"{readStateIndex:1169; appliedIndex:1169; }","duration":"311.137191ms","start":"2025-11-01T09:28:46.689579Z","end":"2025-11-01T09:28:47.000716Z","steps":["trace[445217228] 'read index received' (duration: 311.128923ms)","trace[445217228] 'applied index is now lower than readState.Index' (duration: 7.139µs)"],"step_count":2}
{"level":"warn","ts":"2025-11-01T09:28:47.001253Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"311.67117ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-01T09:28:47.001292Z","caller":"traceutil/trace.go:172","msg":"trace[560079232] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1140; }","duration":"311.699339ms","start":"2025-11-01T09:28:46.689572Z","end":"2025-11-01T09:28:47.001272Z","steps":["trace[560079232] 'agreement among raft nodes before linearized reading' (duration: 311.645098ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-01T09:28:47.001058Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T09:28:46.670576Z","time spent":"330.398047ms","remote":"127.0.0.1:53022","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1116 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
{"level":"warn","ts":"2025-11-01T09:28:47.002990Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"263.217959ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-01T09:28:47.003132Z","caller":"traceutil/trace.go:172","msg":"trace[25317338] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1140; }","duration":"263.370559ms","start":"2025-11-01T09:28:46.739752Z","end":"2025-11-01T09:28:47.003123Z","steps":["trace[25317338] 'agreement among raft nodes before linearized reading' (duration: 262.300594ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-01T09:28:47.003559Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"148.852336ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-01T09:28:47.003609Z","caller":"traceutil/trace.go:172","msg":"trace[510491791] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1140; }","duration":"148.907414ms","start":"2025-11-01T09:28:46.854696Z","end":"2025-11-01T09:28:47.003603Z","steps":["trace[510491791] 'agreement among raft nodes before linearized reading' (duration: 148.830719ms)"],"step_count":1}
{"level":"info","ts":"2025-11-01T09:28:53.254237Z","caller":"traceutil/trace.go:172","msg":"trace[1749761227] linearizableReadLoop","detail":"{readStateIndex:1192; appliedIndex:1192; }","duration":"160.961575ms","start":"2025-11-01T09:28:53.093246Z","end":"2025-11-01T09:28:53.254208Z","steps":["trace[1749761227] 'read index received' (duration: 160.954681ms)","trace[1749761227] 'applied index is now lower than readState.Index' (duration: 5.638µs)"],"step_count":2}
{"level":"warn","ts":"2025-11-01T09:28:53.254686Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.414524ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-01T09:28:53.256165Z","caller":"traceutil/trace.go:172","msg":"trace[1232810910] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:1161; }","duration":"162.719583ms","start":"2025-11-01T09:28:53.093242Z","end":"2025-11-01T09:28:53.255961Z","steps":["trace[1232810910] 'agreement among raft nodes before linearized reading' (duration: 161.107332ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-01T09:28:53.256337Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"140.824707ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csidrivers\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-01T09:28:53.256358Z","caller":"traceutil/trace.go:172","msg":"trace[369879715] range","detail":"{range_begin:/registry/csidrivers; range_end:; response_count:0; response_revision:1162; }","duration":"140.917307ms","start":"2025-11-01T09:28:53.115434Z","end":"2025-11-01T09:28:53.256352Z","steps":["trace[369879715] 'agreement among raft nodes before linearized reading' (duration: 140.6395ms)"],"step_count":1}
{"level":"info","ts":"2025-11-01T09:28:53.257459Z","caller":"traceutil/trace.go:172","msg":"trace[207982295] transaction","detail":"{read_only:false; response_revision:1162; number_of_response:1; }","duration":"190.38688ms","start":"2025-11-01T09:28:53.067062Z","end":"2025-11-01T09:28:53.257449Z","steps":["trace[207982295] 'process raft request' (duration: 187.501884ms)"],"step_count":1}
{"level":"info","ts":"2025-11-01T09:28:55.183669Z","caller":"traceutil/trace.go:172","msg":"trace[988535762] transaction","detail":"{read_only:false; response_revision:1163; number_of_response:1; }","duration":"173.374046ms","start":"2025-11-01T09:28:55.010281Z","end":"2025-11-01T09:28:55.183655Z","steps":["trace[988535762] 'process raft request' (duration: 173.257031ms)"],"step_count":1}
{"level":"info","ts":"2025-11-01T09:28:56.172718Z","caller":"traceutil/trace.go:172","msg":"trace[1941284602] linearizableReadLoop","detail":"{readStateIndex:1197; appliedIndex:1197; }","duration":"143.360579ms","start":"2025-11-01T09:28:56.029339Z","end":"2025-11-01T09:28:56.172700Z","steps":["trace[1941284602] 'read index received' (duration: 143.355248ms)","trace[1941284602] 'applied index is now lower than readState.Index' (duration: 4.697µs)"],"step_count":2}
{"level":"warn","ts":"2025-11-01T09:28:56.172921Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.52208ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourceclaims\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-01T09:28:56.172942Z","caller":"traceutil/trace.go:172","msg":"trace[1245640961] range","detail":"{range_begin:/registry/resourceclaims; range_end:; response_count:0; response_revision:1165; }","duration":"143.601465ms","start":"2025-11-01T09:28:56.029335Z","end":"2025-11-01T09:28:56.172936Z","steps":["trace[1245640961] 'agreement among raft nodes before linearized reading' (duration: 143.490593ms)"],"step_count":1}
{"level":"info","ts":"2025-11-01T09:28:56.173305Z","caller":"traceutil/trace.go:172","msg":"trace[1452220831] transaction","detail":"{read_only:false; response_revision:1166; number_of_response:1; }","duration":"252.550287ms","start":"2025-11-01T09:28:55.920746Z","end":"2025-11-01T09:28:56.173297Z","steps":["trace[1452220831] 'process raft request' (duration: 252.418194ms)"],"step_count":1}
{"level":"info","ts":"2025-11-01T09:28:56.175917Z","caller":"traceutil/trace.go:172","msg":"trace[1015778233] transaction","detail":"{read_only:false; response_revision:1167; number_of_response:1; }","duration":"169.552664ms","start":"2025-11-01T09:28:56.006354Z","end":"2025-11-01T09:28:56.175907Z","steps":["trace[1015778233] 'process raft request' (duration: 169.441663ms)"],"step_count":1}
{"level":"info","ts":"2025-11-01T09:30:16.900173Z","caller":"traceutil/trace.go:172","msg":"trace[850556822] transaction","detail":"{read_only:false; response_revision:1629; number_of_response:1; }","duration":"169.094303ms","start":"2025-11-01T09:30:16.731049Z","end":"2025-11-01T09:30:16.900144Z","steps":["trace[850556822] 'process raft request' (duration: 169.007224ms)"],"step_count":1}
{"level":"info","ts":"2025-11-01T09:30:57.360637Z","caller":"traceutil/trace.go:172","msg":"trace[644747805] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1785; }","duration":"153.294128ms","start":"2025-11-01T09:30:57.207322Z","end":"2025-11-01T09:30:57.360616Z","steps":["trace[644747805] 'process raft request' (duration: 153.138128ms)"],"step_count":1}
==> kernel <==
09:32:31 up 5 min, 0 users, load average: 0.51, 1.20, 0.67
Linux addons-610936 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [8034149d00598833129fa576f1e2fc17f25643b0868c221ee401136b08eb574f] <==
E1101 09:28:31.557543 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.111.202:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.111.202:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.111.202:443: connect: connection refused" logger="UnhandledError"
E1101 09:28:31.559949 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.111.202:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.111.202:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.111.202:443: connect: connection refused" logger="UnhandledError"
I1101 09:28:31.653069 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E1101 09:29:39.976820 1 conn.go:339] Error on socket receive: read tcp 192.168.39.81:8443->192.168.39.1:37408: use of closed network connection
E1101 09:29:40.185521 1 conn.go:339] Error on socket receive: read tcp 192.168.39.81:8443->192.168.39.1:37424: use of closed network connection
I1101 09:29:49.385430 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.23.18"}
I1101 09:30:05.715976 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1101 09:30:05.936493 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.157.14"}
E1101 09:30:30.456166 1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
I1101 09:30:32.588266 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1101 09:30:34.466832 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I1101 09:31:01.937353 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1101 09:31:01.937427 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1101 09:31:01.972654 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1101 09:31:01.972779 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1101 09:31:02.001849 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1101 09:31:02.002438 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1101 09:31:02.081227 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1101 09:31:02.081277 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1101 09:31:02.098727 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1101 09:31:02.098790 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1101 09:31:03.081820 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
W1101 09:31:03.099146 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1101 09:31:03.144443 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
I1101 09:32:30.111114 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.105.61"}
==> kube-controller-manager [2273a9881f45e98bd51b08079c70bba61edb15367d96d4fc307a139e6efdecc0] <==
E1101 09:31:07.186269 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 09:31:07.590018 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 09:31:07.591145 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 09:31:10.031538 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 09:31:10.032674 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 09:31:11.466736 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 09:31:11.467832 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 09:31:12.177986 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 09:31:12.179041 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 09:31:19.833218 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 09:31:19.834976 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 09:31:22.812092 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 09:31:22.813096 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 09:31:23.521960 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 09:31:23.523050 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 09:31:34.706771 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 09:31:34.708057 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 09:31:45.640809 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 09:31:45.642467 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 09:31:48.392733 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 09:31:48.393806 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 09:32:16.179991 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 09:32:16.181159 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1101 09:32:23.377202 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1101 09:32:23.378443 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
==> kube-proxy [b35b53f7950068a92486e0920f2ff6340f6e3caa74dd0d95fbb470ac779d65b6] <==
I1101 09:27:37.511112 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1101 09:27:37.615270 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1101 09:27:37.615321 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.81"]
E1101 09:27:37.616479 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1101 09:27:37.976427 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1101 09:27:37.976547 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1101 09:27:37.976580 1 server_linux.go:132] "Using iptables Proxier"
I1101 09:27:38.006057 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1101 09:27:38.007317 1 server.go:527] "Version info" version="v1.34.1"
I1101 09:27:38.009994 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1101 09:27:38.018945 1 config.go:200] "Starting service config controller"
I1101 09:27:38.018979 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1101 09:27:38.019098 1 config.go:106] "Starting endpoint slice config controller"
I1101 09:27:38.019104 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1101 09:27:38.019829 1 config.go:403] "Starting serviceCIDR config controller"
I1101 09:27:38.019857 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1101 09:27:38.026715 1 config.go:309] "Starting node config controller"
I1101 09:27:38.026755 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1101 09:27:38.026979 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1101 09:27:38.122849 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1101 09:27:38.122956 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1101 09:27:38.122990 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-scheduler [fafb3f50759fb3ca608566a3f99c714cd2c84822225a83a2784a9703746c5e3f] <==
E1101 09:27:27.833388 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1101 09:27:27.833424 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1101 09:27:27.833460 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1101 09:27:27.833548 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1101 09:27:27.833569 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1101 09:27:27.833582 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1101 09:27:27.835128 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1101 09:27:27.835221 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1101 09:27:28.640268 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1101 09:27:28.644262 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1101 09:27:28.715929 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1101 09:27:28.752013 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1101 09:27:28.862688 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1101 09:27:28.914561 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1101 09:27:28.927796 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1101 09:27:28.986511 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1101 09:27:29.004123 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1101 09:27:29.025955 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1101 09:27:29.065699 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1101 09:27:29.205013 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1101 09:27:29.229452 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1101 09:27:29.260154 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1101 09:27:29.262692 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1101 09:27:29.298722 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
I1101 09:27:31.824472 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Nov 01 09:31:06 addons-610936 kubelet[1511]: I1101 09:31:06.076618 1511 scope.go:117] "RemoveContainer" containerID="88f0f76d37225f7e3d6ef11327954d32424d6d9ff6d03c410f83f8c86cd3f930"
Nov 01 09:31:06 addons-610936 kubelet[1511]: I1101 09:31:06.907302 1511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46e366fa-d417-486f-8411-453ae49795b5" path="/var/lib/kubelet/pods/46e366fa-d417-486f-8411-453ae49795b5/volumes"
Nov 01 09:31:06 addons-610936 kubelet[1511]: I1101 09:31:06.907739 1511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96036b32-0725-4493-bc98-d16f3b3a0eab" path="/var/lib/kubelet/pods/96036b32-0725-4493-bc98-d16f3b3a0eab/volumes"
Nov 01 09:31:11 addons-610936 kubelet[1511]: E1101 09:31:11.552156 1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761989471551616158 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588624} inodes_used:{value:201}}"
Nov 01 09:31:11 addons-610936 kubelet[1511]: E1101 09:31:11.552201 1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761989471551616158 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588624} inodes_used:{value:201}}"
Nov 01 09:31:21 addons-610936 kubelet[1511]: E1101 09:31:21.555174 1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761989481554688101 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588624} inodes_used:{value:201}}"
Nov 01 09:31:21 addons-610936 kubelet[1511]: E1101 09:31:21.555226 1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761989481554688101 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588624} inodes_used:{value:201}}"
Nov 01 09:31:31 addons-610936 kubelet[1511]: E1101 09:31:31.558799 1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761989491558333925 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588624} inodes_used:{value:201}}"
Nov 01 09:31:31 addons-610936 kubelet[1511]: E1101 09:31:31.558826 1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761989491558333925 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588624} inodes_used:{value:201}}"
Nov 01 09:31:40 addons-610936 kubelet[1511]: I1101 09:31:40.903571 1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-5pdrl" secret="" err="secret \"gcp-auth\" not found"
Nov 01 09:31:41 addons-610936 kubelet[1511]: E1101 09:31:41.562432 1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761989501561975995 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588624} inodes_used:{value:201}}"
Nov 01 09:31:41 addons-610936 kubelet[1511]: E1101 09:31:41.562480 1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761989501561975995 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588624} inodes_used:{value:201}}"
Nov 01 09:31:51 addons-610936 kubelet[1511]: E1101 09:31:51.566421 1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761989511565722612 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588624} inodes_used:{value:201}}"
Nov 01 09:31:51 addons-610936 kubelet[1511]: E1101 09:31:51.566452 1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761989511565722612 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588624} inodes_used:{value:201}}"
Nov 01 09:31:57 addons-610936 kubelet[1511]: I1101 09:31:57.902470 1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-gbqkt" secret="" err="secret \"gcp-auth\" not found"
Nov 01 09:32:00 addons-610936 kubelet[1511]: I1101 09:32:00.909052 1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Nov 01 09:32:01 addons-610936 kubelet[1511]: E1101 09:32:01.570324 1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761989521569677647 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588624} inodes_used:{value:201}}"
Nov 01 09:32:01 addons-610936 kubelet[1511]: E1101 09:32:01.570389 1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761989521569677647 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588624} inodes_used:{value:201}}"
Nov 01 09:32:11 addons-610936 kubelet[1511]: E1101 09:32:11.574263 1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761989531573660042 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588624} inodes_used:{value:201}}"
Nov 01 09:32:11 addons-610936 kubelet[1511]: E1101 09:32:11.574297 1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761989531573660042 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588624} inodes_used:{value:201}}"
Nov 01 09:32:21 addons-610936 kubelet[1511]: E1101 09:32:21.577802 1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761989541577332484 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588624} inodes_used:{value:201}}"
Nov 01 09:32:21 addons-610936 kubelet[1511]: E1101 09:32:21.577832 1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761989541577332484 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588624} inodes_used:{value:201}}"
Nov 01 09:32:30 addons-610936 kubelet[1511]: I1101 09:32:30.036460 1511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwn2v\" (UniqueName: \"kubernetes.io/projected/f13847ec-6e8a-4499-8515-1d71d187aeba-kube-api-access-xwn2v\") pod \"hello-world-app-5d498dc89-d6d67\" (UID: \"f13847ec-6e8a-4499-8515-1d71d187aeba\") " pod="default/hello-world-app-5d498dc89-d6d67"
Nov 01 09:32:31 addons-610936 kubelet[1511]: E1101 09:32:31.582582 1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761989551581810569 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588624} inodes_used:{value:201}}"
Nov 01 09:32:31 addons-610936 kubelet[1511]: E1101 09:32:31.583152 1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761989551581810569 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588624} inodes_used:{value:201}}"
==> storage-provisioner [5af9f4f51dd4ff474fd24e5769516432460f5a719cc8ada9f3335798427616bd] <==
W1101 09:32:06.790334 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 09:32:08.794394 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 09:32:08.801011 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 09:32:10.805503 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 09:32:10.814192 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 09:32:12.817718 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 09:32:12.825945 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 09:32:14.829944 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 09:32:14.837382 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 09:32:16.842922 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 09:32:16.851119 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 09:32:18.855676 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 09:32:18.867002 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 09:32:20.870691 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 09:32:20.876629 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 09:32:22.881276 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 09:32:22.887611 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 09:32:24.891915 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 09:32:24.898720 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 09:32:26.903100 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 09:32:26.914849 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 09:32:28.920028 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 09:32:28.926735 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 09:32:30.936190 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 09:32:30.945614 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-610936 -n addons-610936
helpers_test.go:269: (dbg) Run: kubectl --context addons-610936 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-d6d67 ingress-nginx-admission-create-v2tvv ingress-nginx-admission-patch-k7flz
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context addons-610936 describe pod hello-world-app-5d498dc89-d6d67 ingress-nginx-admission-create-v2tvv ingress-nginx-admission-patch-k7flz
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-610936 describe pod hello-world-app-5d498dc89-d6d67 ingress-nginx-admission-create-v2tvv ingress-nginx-admission-patch-k7flz: exit status 1 (77.723979ms)
-- stdout --
Name: hello-world-app-5d498dc89-d6d67
Namespace: default
Priority: 0
Service Account: default
Node: addons-610936/192.168.39.81
Start Time: Sat, 01 Nov 2025 09:32:30 +0000
Labels: app=hello-world-app
pod-template-hash=5d498dc89
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/hello-world-app-5d498dc89
Containers:
hello-world-app:
Container ID:
Image: docker.io/kicbase/echo-server:1.0
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xwn2v (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-xwn2v:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2s default-scheduler Successfully assigned default/hello-world-app-5d498dc89-d6d67 to addons-610936
Normal Pulling 2s kubelet Pulling image "docker.io/kicbase/echo-server:1.0"
Normal Pulled 1s kubelet Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 1.376s (1.376s including waiting). Image size: 4944818 bytes.
Normal Created 0s kubelet Created container: hello-world-app
Normal Started 0s kubelet Started container hello-world-app
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-v2tvv" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-k7flz" not found
** /stderr **
helpers_test.go:287: kubectl --context addons-610936 describe pod hello-world-app-5d498dc89-d6d67 ingress-nginx-admission-create-v2tvv ingress-nginx-admission-patch-k7flz: exit status 1
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-610936 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-610936 addons disable ingress-dns --alsologtostderr -v=1: (1.15313668s)
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-610936 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-610936 addons disable ingress --alsologtostderr -v=1: (7.845391546s)
--- FAIL: TestAddons/parallel/Ingress (156.43s)