=== RUN TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 9.630938ms
addons_test.go:868: volcano-scheduler stabilized in 9.668999ms
addons_test.go:876: volcano-admission stabilized in 9.715784ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-xpx6k" [b36e9518-42d7-4650-86e6-facb44dadd1c] Pending / Ready:ContainersNotReady (containers with unready status: [volcano-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [volcano-scheduler])
addons_test.go:890: ***** TestAddons/serial/Volcano: pod "app=volcano-scheduler" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:890: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
addons_test.go:890: TestAddons/serial/Volcano: showing logs for failed pods as of 2025-10-18 14:06:11.817556223 +0000 UTC m=+519.699826600
addons_test.go:890: (dbg) Run: kubectl --context minikube describe po volcano-scheduler-76c996c8bf-xpx6k -n volcano-system
addons_test.go:890: (dbg) kubectl --context minikube describe po volcano-scheduler-76c996c8bf-xpx6k -n volcano-system:
Name: volcano-scheduler-76c996c8bf-xpx6k
Namespace: volcano-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Service Account: volcano-scheduler
Node: ubuntu-20-agent-6/10.154.0.2
Start Time: Sat, 18 Oct 2025 13:58:43 +0000
Labels: app=volcano-scheduler
pod-template-hash=76c996c8bf
Annotations: <none>
Status: Pending
SeccompProfile: RuntimeDefault
IP: 10.244.0.20
IPs:
IP: 10.244.0.20
Controlled By: ReplicaSet/volcano-scheduler-76c996c8bf
Containers:
volcano-scheduler:
Container ID:
Image: docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34
Image ID:
Port: <none>
Host Port: <none>
Args:
--logtostderr
--scheduler-conf=/volcano.scheduler/volcano-scheduler.conf
--enable-healthz=true
--enable-metrics=true
--leader-elect=false
--kube-api-qps=2000
--kube-api-burst=2000
--schedule-period=1s
--node-worker-threads=20
-v=3
2>&1
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
DEBUG_SOCKET_DIR: /tmp/klog-socks
Mounts:
/tmp/klog-socks from klog-sock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q2f6t (ro)
/volcano.scheduler from scheduler-config (rw)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
scheduler-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: volcano-scheduler-configmap
Optional: false
klog-sock:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-q2f6t:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m28s default-scheduler Successfully assigned volcano-system/volcano-scheduler-76c996c8bf-xpx6k to ubuntu-20-agent-6
Warning Failed 6m15s kubelet Failed to pull image "docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal Pulling 3m59s (x5 over 7m27s) kubelet Pulling image "docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34"
Warning Failed 3m58s (x4 over 6m55s) kubelet Failed to pull image "docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning Failed 3m58s (x5 over 6m55s) kubelet Error: ErrImagePull
Warning Failed 115s (x20 over 6m55s) kubelet Error: ImagePullBackOff
Normal BackOff 104s (x21 over 6m55s) kubelet Back-off pulling image "docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34"
addons_test.go:890: (dbg) Run: kubectl --context minikube logs volcano-scheduler-76c996c8bf-xpx6k -n volcano-system
addons_test.go:890: (dbg) Non-zero exit: kubectl --context minikube logs volcano-scheduler-76c996c8bf-xpx6k -n volcano-system: exit status 1 (78.338054ms)
** stderr **
Error from server (BadRequest): container "volcano-scheduler" in pod "volcano-scheduler-76c996c8bf-xpx6k" is waiting to start: trying and failing to pull image
** /stderr **
addons_test.go:890: kubectl --context minikube logs volcano-scheduler-76c996c8bf-xpx6k -n volcano-system: exit status 1
addons_test.go:891: failed waiting for app=volcano-scheduler pod: app=volcano-scheduler within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestAddons/serial/Volcano]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:252: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p minikube logs -n 25: (1.277804844s)
helpers_test.go:260: TestAddons/serial/Volcano logs:
-- stdout --
==> Audit <==
┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ start │ -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm │ minikube │ jenkins │ v1.37.0 │ 18 Oct 25 13:57 UTC │ │
│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 18 Oct 25 13:57 UTC │ 18 Oct 25 13:57 UTC │
│ delete │ -p minikube │ minikube │ jenkins │ v1.37.0 │ 18 Oct 25 13:57 UTC │ 18 Oct 25 13:57 UTC │
│ start │ -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm │ minikube │ jenkins │ v1.37.0 │ 18 Oct 25 13:57 UTC │ │
│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 18 Oct 25 13:57 UTC │ 18 Oct 25 13:57 UTC │
│ delete │ -p minikube │ minikube │ jenkins │ v1.37.0 │ 18 Oct 25 13:57 UTC │ 18 Oct 25 13:57 UTC │
│ delete │ -p minikube │ minikube │ jenkins │ v1.37.0 │ 18 Oct 25 13:57 UTC │ 18 Oct 25 13:57 UTC │
│ delete │ -p minikube │ minikube │ jenkins │ v1.37.0 │ 18 Oct 25 13:57 UTC │ 18 Oct 25 13:57 UTC │
│ start │ --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:44553 --driver=none --bootstrapper=kubeadm │ minikube │ jenkins │ v1.37.0 │ 18 Oct 25 13:57 UTC │ │
│ delete │ -p minikube │ minikube │ jenkins │ v1.37.0 │ 18 Oct 25 13:57 UTC │ 18 Oct 25 13:57 UTC │
│ start │ -p minikube --alsologtostderr -v=1 --memory=3072 --wait=true --driver=none --bootstrapper=kubeadm │ minikube │ jenkins │ v1.37.0 │ 18 Oct 25 13:57 UTC │ 18 Oct 25 13:58 UTC │
│ delete │ -p minikube │ minikube │ jenkins │ v1.37.0 │ 18 Oct 25 13:58 UTC │ 18 Oct 25 13:58 UTC │
│ addons │ enable dashboard -p minikube │ minikube │ jenkins │ v1.37.0 │ 18 Oct 25 13:58 UTC │ │
│ addons │ disable dashboard -p minikube │ minikube │ jenkins │ v1.37.0 │ 18 Oct 25 13:58 UTC │ │
│ start │ -p minikube --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=none --bootstrapper=kubeadm │ minikube │ jenkins │ v1.37.0 │ 18 Oct 25 13:58 UTC │ 18 Oct 25 14:00 UTC │
└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/10/18 13:58:20
Running on machine: ubuntu-20-agent-6
Binary: Built with gc go1.24.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1018 13:58:20.038239 387939 out.go:360] Setting OutFile to fd 1 ...
I1018 13:58:20.038564 387939 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 13:58:20.038576 387939 out.go:374] Setting ErrFile to fd 2...
I1018 13:58:20.038582 387939 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 13:58:20.038830 387939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-380490/.minikube/bin
I1018 13:58:20.039445 387939 out.go:368] Setting JSON to false
I1018 13:58:20.040384 387939 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6042,"bootTime":1760789858,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1018 13:58:20.040479 387939 start.go:141] virtualization: kvm guest
I1018 13:58:20.042567 387939 out.go:179] * minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
W1018 13:58:20.043690 387939 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21409-380490/.minikube/cache/preloaded-tarball: no such file or directory
I1018 13:58:20.043728 387939 notify.go:220] Checking for updates...
I1018 13:58:20.043748 387939 out.go:179] - MINIKUBE_LOCATION=21409
I1018 13:58:20.044916 387939 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1018 13:58:20.046100 387939 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21409-380490/kubeconfig
I1018 13:58:20.047158 387939 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-380490/.minikube
I1018 13:58:20.048217 387939 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1018 13:58:20.052902 387939 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1018 13:58:20.054214 387939 driver.go:421] Setting default libvirt URI to qemu:///system
I1018 13:58:20.067938 387939 out.go:179] * Using the none driver based on user configuration
I1018 13:58:20.069198 387939 start.go:305] selected driver: none
I1018 13:58:20.069217 387939 start.go:925] validating driver "none" against <nil>
I1018 13:58:20.069229 387939 start.go:936] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1018 13:58:20.069287 387939 start.go:1754] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
W1018 13:58:20.069606 387939 out.go:285] ! The 'none' driver does not respect the --memory flag
I1018 13:58:20.070234 387939 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1018 13:58:20.070530 387939 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1018 13:58:20.070572 387939 cni.go:84] Creating CNI manager for ""
I1018 13:58:20.070627 387939 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1018 13:58:20.070639 387939 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1018 13:58:20.070695 387939 start.go:349] cluster config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1018 13:58:20.072143 387939 out.go:179] * Starting "minikube" primary control-plane node in "minikube" cluster
I1018 13:58:20.073613 387939 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/config.json ...
I1018 13:58:20.073650 387939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/config.json: {Name:mk53678720a7eb0531be7b6baf7245b571f7ab9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 13:58:20.073775 387939 start.go:360] acquireMachinesLock for minikube: {Name:mkd732e8976d49b03c78aff25f0df1ef7d40698d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1018 13:58:20.073802 387939 start.go:364] duration metric: took 15.435µs to acquireMachinesLock for "minikube"
I1018 13:58:20.073812 387939 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I1018 13:58:20.073865 387939 start.go:125] createHost starting for "" (driver="none")
I1018 13:58:20.075197 387939 out.go:179] * Running on localhost (CPUs=8, Memory=32093MB, Disk=297540MB) ...
I1018 13:58:20.076255 387939 exec_runner.go:51] Run: systemctl --version
I1018 13:58:20.078431 387939 start.go:159] libmachine.API.Create for "minikube" (driver="none")
I1018 13:58:20.078469 387939 client.go:168] LocalClient.Create starting
I1018 13:58:20.078533 387939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-380490/.minikube/certs/ca.pem
I1018 13:58:20.078574 387939 main.go:141] libmachine: Decoding PEM data...
I1018 13:58:20.078593 387939 main.go:141] libmachine: Parsing certificate...
I1018 13:58:20.078643 387939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-380490/.minikube/certs/cert.pem
I1018 13:58:20.078672 387939 main.go:141] libmachine: Decoding PEM data...
I1018 13:58:20.078688 387939 main.go:141] libmachine: Parsing certificate...
I1018 13:58:20.079002 387939 client.go:171] duration metric: took 524.068µs to LocalClient.Create
I1018 13:58:20.079032 387939 start.go:167] duration metric: took 602.775µs to libmachine.API.Create "minikube"
I1018 13:58:20.079040 387939 start.go:293] postStartSetup for "minikube" (driver="none")
I1018 13:58:20.079085 387939 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1018 13:58:20.079138 387939 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1018 13:58:20.090849 387939 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1018 13:58:20.090888 387939 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1018 13:58:20.090900 387939 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1018 13:58:20.092832 387939 out.go:179] * OS release is Ubuntu 22.04.5 LTS
I1018 13:58:20.093940 387939 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-380490/.minikube/addons for local assets ...
I1018 13:58:20.094013 387939 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-380490/.minikube/files for local assets ...
I1018 13:58:20.094036 387939 start.go:296] duration metric: took 14.989587ms for postStartSetup
I1018 13:58:20.094677 387939 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/config.json ...
I1018 13:58:20.094824 387939 start.go:128] duration metric: took 20.949928ms to createHost
I1018 13:58:20.094838 387939 start.go:83] releasing machines lock for "minikube", held for 21.029922ms
I1018 13:58:20.095181 387939 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1018 13:58:20.095278 387939 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
W1018 13:58:20.097329 387939 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1018 13:58:20.097417 387939 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1018 13:58:20.109742 387939 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I1018 13:58:20.109786 387939 start.go:495] detecting cgroup driver to use...
I1018 13:58:20.109822 387939 detect.go:190] detected "systemd" cgroup driver on host os
I1018 13:58:20.109975 387939 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1018 13:58:20.135860 387939 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1018 13:58:20.149023 387939 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1018 13:58:20.161916 387939 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
I1018 13:58:20.161997 387939 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1018 13:58:20.175022 387939 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1018 13:58:20.187132 387939 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1018 13:58:20.199507 387939 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1018 13:58:20.212152 387939 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1018 13:58:20.224122 387939 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1018 13:58:20.236085 387939 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1018 13:58:20.247520 387939 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1018 13:58:20.260880 387939 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1018 13:58:20.271931 387939 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1018 13:58:20.282425 387939 exec_runner.go:51] Run: sudo systemctl daemon-reload
I1018 13:58:20.498794 387939 exec_runner.go:51] Run: sudo systemctl restart containerd
I1018 13:58:20.575228 387939 start.go:495] detecting cgroup driver to use...
I1018 13:58:20.575284 387939 detect.go:190] detected "systemd" cgroup driver on host os
I1018 13:58:20.575459 387939 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1018 13:58:20.601506 387939 exec_runner.go:51] Run: which cri-dockerd
I1018 13:58:20.602712 387939 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1018 13:58:20.614606 387939 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
I1018 13:58:20.614646 387939 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
I1018 13:58:20.614699 387939 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
I1018 13:58:20.626790 387939 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
I1018 13:58:20.627046 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube794066254 /etc/systemd/system/cri-docker.service.d/10-cni.conf
I1018 13:58:20.638861 387939 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I1018 13:58:20.859515 387939 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I1018 13:58:21.073790 387939 docker.go:575] configuring docker to use "systemd" as cgroup driver...
I1018 13:58:21.073924 387939 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
I1018 13:58:21.073935 387939 exec_runner.go:203] rm: /etc/docker/daemon.json
I1018 13:58:21.073970 387939 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
I1018 13:58:21.085105 387939 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (129 bytes)
I1018 13:58:21.085326 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1433564333 /etc/docker/daemon.json
I1018 13:58:21.096769 387939 exec_runner.go:51] Run: sudo systemctl reset-failed docker
I1018 13:58:21.110376 387939 exec_runner.go:51] Run: sudo systemctl daemon-reload
I1018 13:58:21.329115 387939 exec_runner.go:51] Run: sudo systemctl restart docker
I1018 13:58:22.119233 387939 exec_runner.go:51] Run: sudo systemctl is-active --quiet service docker
I1018 13:58:22.133626 387939 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
I1018 13:58:22.147661 387939 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
I1018 13:58:22.166613 387939 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I1018 13:58:22.181933 387939 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
I1018 13:58:22.398853 387939 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I1018 13:58:22.619356 387939 exec_runner.go:51] Run: sudo systemctl daemon-reload
I1018 13:58:22.830862 387939 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
I1018 13:58:22.856824 387939 exec_runner.go:51] Run: sudo systemctl reset-failed cri-docker.service
I1018 13:58:22.870993 387939 exec_runner.go:51] Run: sudo systemctl daemon-reload
I1018 13:58:23.085008 387939 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
I1018 13:58:23.172007 387939 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I1018 13:58:23.186410 387939 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1018 13:58:23.186496 387939 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
I1018 13:58:23.187883 387939 start.go:563] Will wait 60s for crictl version
I1018 13:58:23.187929 387939 exec_runner.go:51] Run: which crictl
I1018 13:58:23.189189 387939 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
I1018 13:58:23.222795 387939 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 28.5.1
RuntimeApiVersion: v1
I1018 13:58:23.222872 387939 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I1018 13:58:23.246838 387939 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I1018 13:58:23.271983 387939 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.5.1 ...
I1018 13:58:23.272064 387939 exec_runner.go:51] Run: grep 127.0.0.1 host.minikube.internal$ /etc/hosts
I1018 13:58:23.274848 387939 out.go:179] - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
I1018 13:58:23.275690 387939 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1018 13:58:23.275822 387939 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
I1018 13:58:23.275832 387939 kubeadm.go:934] updating node { 10.154.0.2 8443 v1.34.1 docker true true} ...
I1018 13:58:23.275957 387939 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-6 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.154.0.2 --resolv-conf=/run/systemd/resolve/resolv.conf
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
I1018 13:58:23.276004 387939 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
I1018 13:58:23.334425 387939 cni.go:84] Creating CNI manager for ""
I1018 13:58:23.334456 387939 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1018 13:58:23.334477 387939 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1018 13:58:23.334497 387939 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.154.0.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-6 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.154.0.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.154.0.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1018 13:58:23.334639 387939 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 10.154.0.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "ubuntu-20-agent-6"
kubeletExtraArgs:
- name: "node-ip"
value: "10.154.0.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "10.154.0.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1018 13:58:23.334712 387939 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1018 13:58:23.346527 387939 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: exit status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
Initiating transfer...
I1018 13:58:23.346588 387939 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
I1018 13:58:23.358129 387939 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256
I1018 13:58:23.358140 387939 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
I1018 13:58:23.358190 387939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-380490/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
I1018 13:58:23.358194 387939 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I1018 13:58:23.358146 387939 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
I1018 13:58:23.358366 387939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-380490/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
I1018 13:58:23.373850 387939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-380490/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
I1018 13:58:23.406189 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1842859082 /var/lib/minikube/binaries/v1.34.1/kubectl
I1018 13:58:23.418833 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2757408579 /var/lib/minikube/binaries/v1.34.1/kubeadm
I1018 13:58:23.423194 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2943325858 /var/lib/minikube/binaries/v1.34.1/kubelet
I1018 13:58:23.482020 387939 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1018 13:58:23.493420 387939 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
I1018 13:58:23.493445 387939 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I1018 13:58:23.493510 387939 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I1018 13:58:23.504592 387939 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
I1018 13:58:23.504773 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1613476617 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I1018 13:58:23.516424 387939 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
I1018 13:58:23.516459 387939 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
I1018 13:58:23.516510 387939 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
I1018 13:58:23.537415 387939 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1018 13:58:23.537602 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3567366269 /lib/systemd/system/kubelet.service
I1018 13:58:23.549242 387939 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
I1018 13:58:23.549430 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4167056002 /var/tmp/minikube/kubeadm.yaml.new
I1018 13:58:23.561160 387939 exec_runner.go:51] Run: grep 10.154.0.2 control-plane.minikube.internal$ /etc/hosts
I1018 13:58:23.562675 387939 exec_runner.go:51] Run: sudo systemctl daemon-reload
I1018 13:58:23.779526 387939 exec_runner.go:51] Run: sudo systemctl start kubelet
I1018 13:58:23.804212 387939 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube for IP: 10.154.0.2
I1018 13:58:23.804240 387939 certs.go:195] generating shared ca certs ...
I1018 13:58:23.804265 387939 certs.go:227] acquiring lock for ca certs: {Name:mkd0ebfe8e1ca5aec698b6afa3355eaa3e2f8129 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 13:58:23.804471 387939 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-380490/.minikube/ca.key
I1018 13:58:23.804575 387939 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-380490/.minikube/proxy-client-ca.key
I1018 13:58:23.804594 387939 certs.go:257] generating profile certs ...
I1018 13:58:23.804674 387939 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/client.key
I1018 13:58:23.804698 387939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/client.crt with IP's: []
I1018 13:58:23.829236 387939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/client.crt ...
I1018 13:58:23.829268 387939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/client.crt: {Name:mka66d22242822081803c868fe3301cd24d51f50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 13:58:23.829462 387939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/client.key ...
I1018 13:58:23.829479 387939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/client.key: {Name:mk524411ef717f87c7c1d438a1ae7bce18ebed9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 13:58:23.829580 387939 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/apiserver.key.d19e605b
I1018 13:58:23.829606 387939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/apiserver.crt.d19e605b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.154.0.2]
I1018 13:58:24.074697 387939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/apiserver.crt.d19e605b ...
I1018 13:58:24.074731 387939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/apiserver.crt.d19e605b: {Name:mk751b4307f8ab54b7f91e9c7b00ec47b4e3b1d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 13:58:24.074909 387939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/apiserver.key.d19e605b ...
I1018 13:58:24.074931 387939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/apiserver.key.d19e605b: {Name:mka13d0c87261a49901fcf1b8c721c4aeb55f11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 13:58:24.075025 387939 certs.go:382] copying /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/apiserver.crt.d19e605b -> /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/apiserver.crt
I1018 13:58:24.075166 387939 certs.go:386] copying /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/apiserver.key.d19e605b -> /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/apiserver.key
I1018 13:58:24.075267 387939 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/proxy-client.key
I1018 13:58:24.075290 387939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I1018 13:58:24.191595 387939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/proxy-client.crt ...
I1018 13:58:24.191627 387939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/proxy-client.crt: {Name:mk582e171a9bb45907520a6537dd03a508913bb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 13:58:24.191796 387939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/proxy-client.key ...
I1018 13:58:24.191814 387939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/proxy-client.key: {Name:mkb8e49af75659973c7346e7eafa91951096356e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 13:58:24.192047 387939 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-380490/.minikube/certs/ca-key.pem (1679 bytes)
I1018 13:58:24.192095 387939 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-380490/.minikube/certs/ca.pem (1082 bytes)
I1018 13:58:24.192132 387939 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-380490/.minikube/certs/cert.pem (1123 bytes)
I1018 13:58:24.192164 387939 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-380490/.minikube/certs/key.pem (1675 bytes)
I1018 13:58:24.192871 387939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-380490/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1018 13:58:24.193034 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2897527981 /var/lib/minikube/certs/ca.crt
I1018 13:58:24.204778 387939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-380490/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1018 13:58:24.204941 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube935520560 /var/lib/minikube/certs/ca.key
I1018 13:58:24.216268 387939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-380490/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1018 13:58:24.216454 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1468571603 /var/lib/minikube/certs/proxy-client-ca.crt
I1018 13:58:24.227532 387939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-380490/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1018 13:58:24.227694 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2267189522 /var/lib/minikube/certs/proxy-client-ca.key
I1018 13:58:24.238635 387939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
I1018 13:58:24.238779 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3928889990 /var/lib/minikube/certs/apiserver.crt
I1018 13:58:24.249977 387939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1018 13:58:24.250136 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3679660204 /var/lib/minikube/certs/apiserver.key
I1018 13:58:24.261616 387939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1018 13:58:24.261826 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube563441558 /var/lib/minikube/certs/proxy-client.crt
I1018 13:58:24.272936 387939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-380490/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1018 13:58:24.273085 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube811889102 /var/lib/minikube/certs/proxy-client.key
I1018 13:58:24.284220 387939 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
I1018 13:58:24.284241 387939 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
I1018 13:58:24.284282 387939 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
I1018 13:58:24.294973 387939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-380490/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1018 13:58:24.295120 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2767908444 /usr/share/ca-certificates/minikubeCA.pem
I1018 13:58:24.306585 387939 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1018 13:58:24.306746 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4204088561 /var/lib/minikube/kubeconfig
I1018 13:58:24.317873 387939 exec_runner.go:51] Run: openssl version
I1018 13:58:24.320903 387939 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1018 13:58:24.332855 387939 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1018 13:58:24.334223 387939 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Oct 18 13:58 /usr/share/ca-certificates/minikubeCA.pem
I1018 13:58:24.334266 387939 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1018 13:58:24.338690 387939 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1018 13:58:24.350115 387939 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1018 13:58:24.351354 387939 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1018 13:58:24.351401 387939 kubeadm.go:400] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1018 13:58:24.351517 387939 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1018 13:58:24.369728 387939 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1018 13:58:24.381425 387939 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1018 13:58:24.392683 387939 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I1018 13:58:24.415041 387939 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1018 13:58:24.426527 387939 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1018 13:58:24.426547 387939 kubeadm.go:157] found existing configuration files:
I1018 13:58:24.426604 387939 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1018 13:58:24.437788 387939 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1018 13:58:24.437848 387939 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
I1018 13:58:24.448553 387939 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1018 13:58:24.459462 387939 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1018 13:58:24.459523 387939 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1018 13:58:24.469992 387939 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1018 13:58:24.480606 387939 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1018 13:58:24.480667 387939 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1018 13:58:24.491164 387939 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1018 13:58:24.502214 387939 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1018 13:58:24.502313 387939 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1018 13:58:24.513628 387939 exec_runner.go:97] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1018 13:58:24.557457 387939 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
I1018 13:58:24.557488 387939 kubeadm.go:318] [preflight] Running pre-flight checks
I1018 13:58:24.650210 387939 kubeadm.go:318] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1018 13:58:24.650416 387939 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
I1018 13:58:24.650432 387939 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1018 13:58:24.650438 387939 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1018 13:58:24.662611 387939 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1018 13:58:24.666457 387939 out.go:252] - Generating certificates and keys ...
I1018 13:58:24.666499 387939 kubeadm.go:318] [certs] Using existing ca certificate authority
I1018 13:58:24.666509 387939 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
I1018 13:58:25.083076 387939 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
I1018 13:58:25.160404 387939 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
I1018 13:58:25.506660 387939 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
I1018 13:58:26.150895 387939 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
I1018 13:58:26.553568 387939 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
I1018 13:58:26.553608 387939 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-6] and IPs [10.154.0.2 127.0.0.1 ::1]
I1018 13:58:26.728169 387939 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
I1018 13:58:26.728330 387939 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-6] and IPs [10.154.0.2 127.0.0.1 ::1]
I1018 13:58:27.043934 387939 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
I1018 13:58:27.358848 387939 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
I1018 13:58:27.845409 387939 kubeadm.go:318] [certs] Generating "sa" key and public key
I1018 13:58:27.845583 387939 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1018 13:58:27.982932 387939 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
I1018 13:58:28.124873 387939 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1018 13:58:28.676122 387939 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1018 13:58:28.956694 387939 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1018 13:58:29.296274 387939 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1018 13:58:29.296867 387939 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1018 13:58:29.298983 387939 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1018 13:58:29.301336 387939 out.go:252] - Booting up control plane ...
I1018 13:58:29.301369 387939 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1018 13:58:29.301392 387939 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1018 13:58:29.301703 387939 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1018 13:58:29.316980 387939 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1018 13:58:29.317045 387939 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1018 13:58:29.322075 387939 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1018 13:58:29.322437 387939 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1018 13:58:29.322484 387939 kubeadm.go:318] [kubelet-start] Starting the kubelet
I1018 13:58:29.559709 387939 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1018 13:58:29.559733 387939 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1018 13:58:30.560760 387939 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001019986s
I1018 13:58:30.564648 387939 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1018 13:58:30.564677 387939 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://10.154.0.2:8443/livez
I1018 13:58:30.564682 387939 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1018 13:58:30.564686 387939 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1018 13:58:32.352232 387939 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.787577991s
I1018 13:58:32.959751 387939 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.395227053s
I1018 13:58:34.566150 387939 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.00150824s
I1018 13:58:34.577484 387939 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1018 13:58:34.587338 387939 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1018 13:58:34.596980 387939 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
I1018 13:58:34.597008 387939 kubeadm.go:318] [mark-control-plane] Marking the node ubuntu-20-agent-6 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1018 13:58:34.605117 387939 kubeadm.go:318] [bootstrap-token] Using token: ey4h38.oogtmjl4nsl1kgd1
I1018 13:58:34.606588 387939 out.go:252] - Configuring RBAC rules ...
I1018 13:58:34.606622 387939 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1018 13:58:34.609741 387939 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1018 13:58:34.615515 387939 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1018 13:58:34.617904 387939 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1018 13:58:34.620468 387939 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1018 13:58:34.622947 387939 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1018 13:58:34.972952 387939 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1018 13:58:35.397500 387939 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
I1018 13:58:35.973572 387939 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
I1018 13:58:35.974513 387939 kubeadm.go:318]
I1018 13:58:35.974532 387939 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
I1018 13:58:35.974536 387939 kubeadm.go:318]
I1018 13:58:35.974541 387939 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
I1018 13:58:35.974544 387939 kubeadm.go:318]
I1018 13:58:35.974548 387939 kubeadm.go:318] mkdir -p $HOME/.kube
I1018 13:58:35.974552 387939 kubeadm.go:318] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1018 13:58:35.974555 387939 kubeadm.go:318] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1018 13:58:35.974573 387939 kubeadm.go:318]
I1018 13:58:35.974577 387939 kubeadm.go:318] Alternatively, if you are the root user, you can run:
I1018 13:58:35.974580 387939 kubeadm.go:318]
I1018 13:58:35.974585 387939 kubeadm.go:318] export KUBECONFIG=/etc/kubernetes/admin.conf
I1018 13:58:35.974589 387939 kubeadm.go:318]
I1018 13:58:35.974592 387939 kubeadm.go:318] You should now deploy a pod network to the cluster.
I1018 13:58:35.974596 387939 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1018 13:58:35.974601 387939 kubeadm.go:318] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1018 13:58:35.974611 387939 kubeadm.go:318]
I1018 13:58:35.974615 387939 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
I1018 13:58:35.974623 387939 kubeadm.go:318] and service account keys on each node and then running the following as root:
I1018 13:58:35.974626 387939 kubeadm.go:318]
I1018 13:58:35.974631 387939 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ey4h38.oogtmjl4nsl1kgd1 \
I1018 13:58:35.974634 387939 kubeadm.go:318] --discovery-token-ca-cert-hash sha256:80e399d493c731c91a727d6889d58c0bcc9599ede579291df60ae63b13551066 \
I1018 13:58:35.974637 387939 kubeadm.go:318] --control-plane
I1018 13:58:35.974640 387939 kubeadm.go:318]
I1018 13:58:35.974642 387939 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
I1018 13:58:35.974645 387939 kubeadm.go:318]
I1018 13:58:35.974648 387939 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ey4h38.oogtmjl4nsl1kgd1 \
I1018 13:58:35.974666 387939 kubeadm.go:318] --discovery-token-ca-cert-hash sha256:80e399d493c731c91a727d6889d58c0bcc9599ede579291df60ae63b13551066
I1018 13:58:35.978102 387939 cni.go:84] Creating CNI manager for ""
I1018 13:58:35.978131 387939 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1018 13:58:35.979968 387939 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1018 13:58:35.981354 387939 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
I1018 13:58:35.996700 387939 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I1018 13:58:35.996885 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3303131056 /etc/cni/net.d/1-k8s.conflist
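For context on the CNI step above: the 496-byte conflist itself is not reproduced in the log, so the sketch below only illustrates the general shape of a bridge+portmap CNI config of the kind minikube's bridge option writes; every field value (bridge name, subnet, flags) is an assumption, as is the Go wrapper used to write it.

package main

import (
	"log"
	"os"
)

// conflist is an illustrative bridge+portmap CNI config; the values are
// assumed, not copied from the file minikube actually installed.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	// Requires root, mirroring the "sudo mkdir -p /etc/cni/net.d" step above.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}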
I1018 13:58:36.012995 387939 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1018 13:58:36.013087 387939 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1018 13:58:36.013101 387939 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-6 minikube.k8s.io/updated_at=2025_10_18T13_58_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
I1018 13:58:36.023931 387939 ops.go:34] apiserver oom_adj: -16
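A minimal Go sketch of the check logged just above (find the newest kube-apiserver PID, then read its oom_adj score, which came back as -16 here); it mirrors the two shell commands in the log rather than minikube's actual ops.go code, and reading /proc entries for another user's process may need root.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Same probe as the log: newest PID whose full command line matches the pattern.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("kube-apiserver process not found:", err)
		return
	}
	pid := strings.TrimSpace(string(out))

	// Equivalent to: cat /proc/$(pgrep kube-apiserver)/oom_adj
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("cannot read oom_adj:", err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
}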
I1018 13:58:36.087711 387939 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1018 13:58:36.588600 387939 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1018 13:58:37.088549 387939 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1018 13:58:37.588544 387939 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1018 13:58:38.088430 387939 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1018 13:58:38.588498 387939 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1018 13:58:39.088451 387939 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1018 13:58:39.588163 387939 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1018 13:58:40.088686 387939 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1018 13:58:40.588591 387939 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1018 13:58:40.660582 387939 kubeadm.go:1113] duration metric: took 4.647569091s to wait for elevateKubeSystemPrivileges
I1018 13:58:40.660620 387939 kubeadm.go:402] duration metric: took 16.309224645s to StartCluster
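The "kubectl get sa default" loop above polls roughly every 500ms until the default service account exists (the elevateKubeSystemPrivileges wait). A minimal sketch of that polling pattern, assuming kubectl is on PATH and reusing the kubeconfig path from the log; the 2-minute timeout is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const kubeconfig = "/var/lib/minikube/kubeconfig" // path taken from the log above
	deadline := time.Now().Add(2 * time.Minute)       // timeout value is an assumption

	for time.Now().Before(deadline) {
		// Same probe the log runs: does the "default" service account exist yet?
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is present")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	fmt.Println("timed out waiting for the default service account")
}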
I1018 13:58:40.660642 387939 settings.go:142] acquiring lock: {Name:mk9213f9faf5fdbed8267950b94240133515cd40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 13:58:40.660706 387939 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21409-380490/kubeconfig
I1018 13:58:40.661371 387939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-380490/kubeconfig: {Name:mk3cf964818f68b6b45525de80bd06000bbfae6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 13:58:40.661614 387939 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1018 13:58:40.661680 387939 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:false volcano:true volumesnapshots:true yakd:true]
I1018 13:58:40.661801 387939 addons.go:69] Setting yakd=true in profile "minikube"
I1018 13:58:40.661795 387939 addons.go:69] Setting gcp-auth=true in profile "minikube"
I1018 13:58:40.661814 387939 addons.go:69] Setting amd-gpu-device-plugin=true in profile "minikube"
I1018 13:58:40.661826 387939 addons.go:238] Setting addon yakd=true in "minikube"
I1018 13:58:40.661835 387939 addons.go:238] Setting addon amd-gpu-device-plugin=true in "minikube"
I1018 13:58:40.661839 387939 mustload.go:65] Loading cluster: minikube
I1018 13:58:40.661841 387939 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
I1018 13:58:40.661855 387939 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 13:58:40.661869 387939 host.go:66] Checking if "minikube" exists ...
I1018 13:58:40.661871 387939 host.go:66] Checking if "minikube" exists ...
I1018 13:58:40.661902 387939 addons.go:69] Setting registry=true in profile "minikube"
I1018 13:58:40.661902 387939 addons.go:238] Setting addon csi-hostpath-driver=true in "minikube"
I1018 13:58:40.661913 387939 addons.go:238] Setting addon registry=true in "minikube"
I1018 13:58:40.661935 387939 host.go:66] Checking if "minikube" exists ...
I1018 13:58:40.661948 387939 addons.go:69] Setting default-storageclass=true in profile "minikube"
I1018 13:58:40.661961 387939 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I1018 13:58:40.662039 387939 addons.go:69] Setting metrics-server=true in profile "minikube"
I1018 13:58:40.662054 387939 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 13:58:40.662066 387939 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
I1018 13:58:40.662088 387939 addons.go:238] Setting addon inspektor-gadget=true in "minikube"
I1018 13:58:40.662112 387939 host.go:66] Checking if "minikube" exists ...
I1018 13:58:40.662597 387939 kubeconfig.go:125] found "minikube" server: "https://10.154.0.2:8443"
I1018 13:58:40.662624 387939 kubeconfig.go:125] found "minikube" server: "https://10.154.0.2:8443"
I1018 13:58:40.662060 387939 addons.go:238] Setting addon metrics-server=true in "minikube"
I1018 13:58:40.662634 387939 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
I1018 13:58:40.662641 387939 api_server.go:166] Checking apiserver status ...
I1018 13:58:40.662646 387939 kubeconfig.go:125] found "minikube" server: "https://10.154.0.2:8443"
I1018 13:58:40.662663 387939 addons.go:69] Setting registry-creds=true in profile "minikube"
I1018 13:58:40.662669 387939 api_server.go:166] Checking apiserver status ...
I1018 13:58:40.661804 387939 addons.go:69] Setting cloud-spanner=true in profile "minikube"
I1018 13:58:40.662675 387939 addons.go:69] Setting volumesnapshots=true in profile "minikube"
I1018 13:58:40.662679 387939 addons.go:238] Setting addon registry-creds=true in "minikube"
I1018 13:58:40.662682 387939 addons.go:238] Setting addon cloud-spanner=true in "minikube"
I1018 13:58:40.662687 387939 addons.go:238] Setting addon volumesnapshots=true in "minikube"
I1018 13:58:40.662690 387939 addons.go:69] Setting volcano=true in profile "minikube"
I1018 13:58:40.662702 387939 host.go:66] Checking if "minikube" exists ...
I1018 13:58:40.662702 387939 addons.go:238] Setting addon volcano=true in "minikube"
I1018 13:58:40.662705 387939 host.go:66] Checking if "minikube" exists ...
I1018 13:58:40.662707 387939 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1018 13:58:40.661937 387939 host.go:66] Checking if "minikube" exists ...
I1018 13:58:40.662721 387939 host.go:66] Checking if "minikube" exists ...
I1018 13:58:40.662779 387939 kubeconfig.go:125] found "minikube" server: "https://10.154.0.2:8443"
I1018 13:58:40.662790 387939 api_server.go:166] Checking apiserver status ...
I1018 13:58:40.662820 387939 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1018 13:58:40.662624 387939 kubeconfig.go:125] found "minikube" server: "https://10.154.0.2:8443"
I1018 13:58:40.663003 387939 api_server.go:166] Checking apiserver status ...
I1018 13:58:40.662664 387939 addons.go:69] Setting storage-provisioner=true in profile "minikube"
I1018 13:58:40.663248 387939 kubeconfig.go:125] found "minikube" server: "https://10.154.0.2:8443"
I1018 13:58:40.663264 387939 api_server.go:166] Checking apiserver status ...
I1018 13:58:40.663299 387939 kubeconfig.go:125] found "minikube" server: "https://10.154.0.2:8443"
I1018 13:58:40.663337 387939 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1018 13:58:40.663344 387939 api_server.go:166] Checking apiserver status ...
I1018 13:58:40.663359 387939 kubeconfig.go:125] found "minikube" server: "https://10.154.0.2:8443"
I1018 13:58:40.663373 387939 api_server.go:166] Checking apiserver status ...
I1018 13:58:40.663384 387939 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1018 13:58:40.663401 387939 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1018 13:58:40.663412 387939 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1018 13:58:40.662659 387939 host.go:66] Checking if "minikube" exists ...
I1018 13:58:40.662679 387939 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1018 13:58:40.662649 387939 kubeconfig.go:125] found "minikube" server: "https://10.154.0.2:8443"
I1018 13:58:40.663785 387939 api_server.go:166] Checking apiserver status ...
I1018 13:58:40.663834 387939 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1018 13:58:40.662649 387939 addons.go:238] Setting addon nvidia-device-plugin=true in "minikube"
I1018 13:58:40.663321 387939 kubeconfig.go:125] found "minikube" server: "https://10.154.0.2:8443"
I1018 13:58:40.664093 387939 api_server.go:166] Checking apiserver status ...
I1018 13:58:40.664135 387939 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1018 13:58:40.663249 387939 addons.go:238] Setting addon storage-provisioner=true in "minikube"
I1018 13:58:40.664189 387939 host.go:66] Checking if "minikube" exists ...
I1018 13:58:40.662709 387939 host.go:66] Checking if "minikube" exists ...
I1018 13:58:40.662629 387939 api_server.go:166] Checking apiserver status ...
I1018 13:58:40.664389 387939 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1018 13:58:40.664001 387939 host.go:66] Checking if "minikube" exists ...
I1018 13:58:40.664948 387939 out.go:179] * Configuring local host environment ...
W1018 13:58:40.666530 387939 out.go:285] *
W1018 13:58:40.666561 387939 out.go:285] ! The 'none' driver is designed for experts who need to integrate with an existing VM
W1018 13:58:40.666569 387939 out.go:285] * Most users should use the newer 'docker' driver instead, which does not require root!
W1018 13:58:40.666578 387939 out.go:285] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
W1018 13:58:40.666582 387939 out.go:285] *
W1018 13:58:40.666615 387939 out.go:285] ! kubectl and minikube configuration will be stored in /home/jenkins
W1018 13:58:40.666625 387939 out.go:285] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
W1018 13:58:40.666629 387939 out.go:285] *
W1018 13:58:40.666647 387939 out.go:285] - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
W1018 13:58:40.666656 387939 out.go:285] - sudo chown -R $USER $HOME/.kube $HOME/.minikube
W1018 13:58:40.666666 387939 out.go:285] *
W1018 13:58:40.666675 387939 out.go:285] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
I1018 13:58:40.666702 387939 start.go:235] Will wait 6m0s for node &{Name: IP:10.154.0.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I1018 13:58:40.669020 387939 out.go:179] * Verifying Kubernetes components...
I1018 13:58:40.670460 387939 exec_runner.go:51] Run: sudo systemctl daemon-reload
I1018 13:58:40.671357 387939 kubeconfig.go:125] found "minikube" server: "https://10.154.0.2:8443"
I1018 13:58:40.671384 387939 api_server.go:166] Checking apiserver status ...
I1018 13:58:40.671420 387939 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1018 13:58:40.671441 387939 kubeconfig.go:125] found "minikube" server: "https://10.154.0.2:8443"
I1018 13:58:40.671460 387939 api_server.go:166] Checking apiserver status ...
I1018 13:58:40.671496 387939 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1018 13:58:40.676881 387939 kubeconfig.go:125] found "minikube" server: "https://10.154.0.2:8443"
I1018 13:58:40.677112 387939 api_server.go:166] Checking apiserver status ...
I1018 13:58:40.677192 387939 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1018 13:58:40.677956 387939 kubeconfig.go:125] found "minikube" server: "https://10.154.0.2:8443"
I1018 13:58:40.677981 387939 api_server.go:166] Checking apiserver status ...
I1018 13:58:40.678021 387939 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1018 13:58:40.698603 387939 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup
I1018 13:58:40.701695 387939 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup
I1018 13:58:40.713458 387939 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup
I1018 13:58:40.719449 387939 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup
I1018 13:58:40.719603 387939 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup
I1018 13:58:40.720978 387939 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup
I1018 13:58:40.722673 387939 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup
I1018 13:58:40.725434 387939 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup
I1018 13:58:40.732250 387939 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup
W1018 13:58:40.732421 387939 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup: exit status 1
stdout:
stderr:
I1018 13:58:40.732465 387939 exec_runner.go:51] Run: ls
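The "unable to find freezer cgroup" warnings in this block come from the egrep exiting 1, which is expected when no freezer controller line exists in /proc/<pid>/cgroup (typical on cgroup v2 hosts, or when the v1 freezer isn't mounted). A minimal sketch of the same probe, reusing the PID from the log:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const pid = "389367" // kube-apiserver PID taken from the log above

	f, err := os.Open("/proc/" + pid + "/cgroup")
	if err != nil {
		fmt.Println("cannot read cgroup file:", err)
		return
	}
	defer f.Close()

	// Equivalent to: egrep '^[0-9]+:freezer:' /proc/<pid>/cgroup
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.Contains(sc.Text(), ":freezer:") {
			fmt.Println("freezer cgroup:", sc.Text())
			return
		}
	}
	// On a cgroup v2 host the file is just "0::/<path>", so no freezer line matches.
	fmt.Println("no freezer controller entry (likely cgroup v2)")
}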
I1018 13:58:40.732599 387939 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup
W1018 13:58:40.734434 387939 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup: exit status 1
stdout:
stderr:
I1018 13:58:40.734542 387939 exec_runner.go:51] Run: ls
I1018 13:58:40.734958 387939 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup
I1018 13:58:40.739445 387939 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup
I1018 13:58:40.740616 387939 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup
I1018 13:58:40.741135 387939 api_server.go:253] Checking apiserver healthz at https://10.154.0.2:8443/healthz ...
I1018 13:58:40.745260 387939 api_server.go:253] Checking apiserver healthz at https://10.154.0.2:8443/healthz ...
I1018 13:58:40.745319 387939 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup
W1018 13:58:40.762064 387939 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup: exit status 1
stdout:
stderr:
I1018 13:58:40.762129 387939 exec_runner.go:51] Run: ls
W1018 13:58:40.763180 387939 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup: exit status 1
stdout:
stderr:
I1018 13:58:40.763229 387939 exec_runner.go:51] Run: ls
I1018 13:58:40.763236 387939 api_server.go:279] https://10.154.0.2:8443/healthz returned 200:
ok
W1018 13:58:40.769384 387939 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup: exit status 1
stdout:
stderr:
I1018 13:58:40.769446 387939 exec_runner.go:51] Run: ls
I1018 13:58:40.771591 387939 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.5
I1018 13:58:40.772978 387939 api_server.go:279] https://10.154.0.2:8443/healthz returned 200:
ok
I1018 13:58:40.773470 387939 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
I1018 13:58:40.773518 387939 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1018 13:58:40.774127 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube617895798 /etc/kubernetes/addons/yakd-ns.yaml
I1018 13:58:40.775440 387939 api_server.go:253] Checking apiserver healthz at https://10.154.0.2:8443/healthz ...
W1018 13:58:40.774499 387939 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup: exit status 1
stdout:
stderr:
I1018 13:58:40.775840 387939 exec_runner.go:51] Run: ls
W1018 13:58:40.778311 387939 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup: exit status 1
stdout:
stderr:
I1018 13:58:40.778428 387939 exec_runner.go:51] Run: ls
I1018 13:58:40.778442 387939 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1018 13:58:40.778955 387939 api_server.go:253] Checking apiserver healthz at https://10.154.0.2:8443/healthz ...
I1018 13:58:40.779745 387939 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1018 13:58:40.779785 387939 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1018 13:58:40.779941 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2943945992 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1018 13:58:40.780590 387939 api_server.go:253] Checking apiserver healthz at https://10.154.0.2:8443/healthz ...
I1018 13:58:40.785483 387939 api_server.go:279] https://10.154.0.2:8443/healthz returned 200:
ok
I1018 13:58:40.785738 387939 api_server.go:253] Checking apiserver healthz at https://10.154.0.2:8443/healthz ...
W1018 13:58:40.786494 387939 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup: exit status 1
stdout:
stderr:
I1018 13:58:40.786553 387939 exec_runner.go:51] Run: ls
W1018 13:58:40.787058 387939 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup: exit status 1
stdout:
stderr:
I1018 13:58:40.787105 387939 exec_runner.go:51] Run: ls
I1018 13:58:40.788911 387939 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1018 13:58:40.790218 387939 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1018 13:58:40.790286 387939 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1018 13:58:40.790467 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube402034936 /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
W1018 13:58:40.790628 387939 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup: exit status 1
stdout:
stderr:
I1018 13:58:40.790810 387939 exec_runner.go:51] Run: ls
I1018 13:58:40.791050 387939 api_server.go:279] https://10.154.0.2:8443/healthz returned 200:
ok
W1018 13:58:40.792247 387939 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup: exit status 1
stdout:
stderr:
I1018 13:58:40.792664 387939 api_server.go:253] Checking apiserver healthz at https://10.154.0.2:8443/healthz ...
I1018 13:58:40.792694 387939 api_server.go:253] Checking apiserver healthz at https://10.154.0.2:8443/healthz ...
I1018 13:58:40.795408 387939 api_server.go:253] Checking apiserver healthz at https://10.154.0.2:8443/healthz ...
I1018 13:58:40.792787 387939 api_server.go:279] https://10.154.0.2:8443/healthz returned 200:
ok
I1018 13:58:40.797408 387939 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1018 13:58:40.798011 387939 api_server.go:253] Checking apiserver healthz at https://10.154.0.2:8443/healthz ...
I1018 13:58:40.799222 387939 exec_runner.go:51] Run: ls
W1018 13:58:40.801336 387939 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup: exit status 1
stdout:
stderr:
I1018 13:58:40.801387 387939 exec_runner.go:51] Run: ls
W1018 13:58:40.802129 387939 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup: exit status 1
stdout:
stderr:
I1018 13:58:40.802176 387939 exec_runner.go:51] Run: ls
I1018 13:58:40.803104 387939 api_server.go:253] Checking apiserver healthz at https://10.154.0.2:8443/healthz ...
I1018 13:58:40.803498 387939 api_server.go:253] Checking apiserver healthz at https://10.154.0.2:8443/healthz ...
I1018 13:58:40.803964 387939 api_server.go:279] https://10.154.0.2:8443/healthz returned 200:
ok
I1018 13:58:40.794236 387939 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
W1018 13:58:40.804568 387939 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup: exit status 1
stdout:
stderr:
I1018 13:58:40.804659 387939 exec_runner.go:51] Run: ls
I1018 13:58:40.805680 387939 addons.go:238] Setting addon default-storageclass=true in "minikube"
I1018 13:58:40.805724 387939 host.go:66] Checking if "minikube" exists ...
I1018 13:58:40.806375 387939 kubeconfig.go:125] found "minikube" server: "https://10.154.0.2:8443"
I1018 13:58:40.806391 387939 api_server.go:166] Checking apiserver status ...
I1018 13:58:40.806428 387939 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1018 13:58:40.806668 387939 api_server.go:253] Checking apiserver healthz at https://10.154.0.2:8443/healthz ...
I1018 13:58:40.806963 387939 out.go:179] - Using image docker.io/registry:3.0.0
I1018 13:58:40.807192 387939 api_server.go:253] Checking apiserver healthz at https://10.154.0.2:8443/healthz ...
I1018 13:58:40.808405 387939 api_server.go:279] https://10.154.0.2:8443/healthz returned 200:
ok
I1018 13:58:40.808824 387939 api_server.go:279] https://10.154.0.2:8443/healthz returned 200:
ok
I1018 13:58:40.809471 387939 api_server.go:279] https://10.154.0.2:8443/healthz returned 200:
ok
I1018 13:58:40.809494 387939 host.go:66] Checking if "minikube" exists ...
I1018 13:58:40.809404 387939 api_server.go:279] https://10.154.0.2:8443/healthz returned 200:
ok
I1018 13:58:40.809413 387939 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
I1018 13:58:40.809879 387939 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1018 13:58:40.810038 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1875312359 /etc/kubernetes/addons/registry-rc.yaml
I1018 13:58:40.810749 387939 api_server.go:279] https://10.154.0.2:8443/healthz returned 200:
ok
I1018 13:58:40.811066 387939 api_server.go:279] https://10.154.0.2:8443/healthz returned 200:
ok
I1018 13:58:40.812048 387939 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
I1018 13:58:40.812187 387939 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1018 13:58:40.812459 387939 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
I1018 13:58:40.813087 387939 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1018 13:58:40.813586 387939 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1018 13:58:40.814033 387939 api_server.go:279] https://10.154.0.2:8443/healthz returned 200:
ok
I1018 13:58:40.815145 387939 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1018 13:58:40.815177 387939 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1018 13:58:40.815369 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3182962134 /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1018 13:58:40.815564 387939 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1018 13:58:40.815587 387939 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1018 13:58:40.815702 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1644665686 /etc/kubernetes/addons/metrics-apiservice.yaml
I1018 13:58:40.815876 387939 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1018 13:58:40.815891 387939 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
I1018 13:58:40.815898 387939 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
I1018 13:58:40.815933 387939 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
I1018 13:58:40.816087 387939 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
I1018 13:58:40.816222 387939 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1018 13:58:40.816254 387939 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
I1018 13:58:40.816272 387939 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
I1018 13:58:40.816434 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3921725032 /etc/kubernetes/addons/ig-crd.yaml
I1018 13:58:40.817006 387939 api_server.go:279] https://10.154.0.2:8443/healthz returned 200:
ok
I1018 13:58:40.817309 387939 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1018 13:58:40.818349 387939 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
I1018 13:58:40.818377 387939 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1018 13:58:40.818521 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3107954507 /etc/kubernetes/addons/deployment.yaml
I1018 13:58:40.818789 387939 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1018 13:58:40.818812 387939 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1018 13:58:40.818950 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube81256162 /etc/kubernetes/addons/registry-creds-rc.yaml
I1018 13:58:40.819132 387939 out.go:179] - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
I1018 13:58:40.822322 387939 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1018 13:58:40.824187 387939 out.go:179] - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
I1018 13:58:40.826380 387939 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
I1018 13:58:40.826419 387939 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1018 13:58:40.826598 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2344559123 /etc/kubernetes/addons/yakd-sa.yaml
I1018 13:58:40.829147 387939 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1018 13:58:40.829328 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1018 13:58:40.830166 387939 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1018 13:58:40.830195 387939 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1018 13:58:40.830434 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4246256411 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1018 13:58:40.830472 387939 out.go:179] - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
I1018 13:58:40.832014 387939 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1018 13:58:40.841481 387939 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup
I1018 13:58:40.841610 387939 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1018 13:58:40.843475 387939 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
I1018 13:58:40.843511 387939 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1018 13:58:40.843682 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2303878036 /etc/kubernetes/addons/registry-svc.yaml
I1018 13:58:40.845172 387939 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
I1018 13:58:40.845419 387939 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1018 13:58:40.845551 387939 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
I1018 13:58:40.847037 387939 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1018 13:58:40.847071 387939 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1018 13:58:40.847220 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1305547312 /etc/kubernetes/addons/rbac-external-attacher.yaml
I1018 13:58:40.847406 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1389054065 /etc/kubernetes/addons/volcano-deployment.yaml
I1018 13:58:40.847844 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1018 13:58:40.855528 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1018 13:58:40.858559 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1018 13:58:40.860208 387939 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
I1018 13:58:40.860255 387939 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1018 13:58:40.860480 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3589757770 /etc/kubernetes/addons/yakd-crb.yaml
I1018 13:58:40.860540 387939 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1018 13:58:40.860583 387939 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1018 13:58:40.860617 387939 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 127.0.0.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
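The pipeline above edits the coredns ConfigMap in place: sed injects a hosts block mapping host.minikube.internal to 127.0.0.1 before the "forward . /etc/resolv.conf" line, inserts a "log" directive before "errors", and the result is piped to kubectl replace. A small Go snippet just to show the injected fragment (the full Corefile is not in the log, and indentation is approximate because the log collapses whitespace):

package main

import "fmt"

// corefileInsert is the block the sed expression above injects ahead of the
// "forward . /etc/resolv.conf" line. The same pipeline also inserts a bare
// "log" directive before the "errors" line.
const corefileInsert = `    hosts {
       127.0.0.1 host.minikube.internal
       fallthrough
    }`

func main() {
	fmt.Println(corefileInsert)
}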
I1018 13:58:40.860777 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1526467519 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1018 13:58:40.862203 387939 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1018 13:58:40.862255 387939 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1018 13:58:40.862436 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1698636829 /etc/kubernetes/addons/metrics-server-deployment.yaml
I1018 13:58:40.863742 387939 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
I1018 13:58:40.863776 387939 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1018 13:58:40.863946 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2713786157 /etc/kubernetes/addons/ig-deployment.yaml
W1018 13:58:40.872081 387939 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup: exit status 1
stdout:
stderr:
I1018 13:58:40.872200 387939 exec_runner.go:51] Run: ls
I1018 13:58:40.874907 387939 api_server.go:253] Checking apiserver healthz at https://10.154.0.2:8443/healthz ...
I1018 13:58:40.874996 387939 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1018 13:58:40.880959 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I1018 13:58:40.881222 387939 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
I1018 13:58:40.881247 387939 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1018 13:58:40.881425 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube768239951 /etc/kubernetes/addons/registry-proxy.yaml
I1018 13:58:40.881633 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube993356673 /etc/kubernetes/addons/storage-provisioner.yaml
I1018 13:58:40.884026 387939 api_server.go:279] https://10.154.0.2:8443/healthz returned 200:
ok
I1018 13:58:40.884092 387939 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I1018 13:58:40.884110 387939 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
I1018 13:58:40.884116 387939 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
I1018 13:58:40.884155 387939 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
I1018 13:58:40.886468 387939 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1018 13:58:40.886498 387939 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1018 13:58:40.886634 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1293381149 /etc/kubernetes/addons/metrics-server-rbac.yaml
I1018 13:58:40.897441 387939 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
I1018 13:58:40.897574 387939 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1018 13:58:40.897666 387939 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1018 13:58:40.897761 387939 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1018 13:58:40.898406 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3748685651 /etc/kubernetes/addons/yakd-svc.yaml
I1018 13:58:40.898534 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1531118554 /etc/kubernetes/addons/rbac-hostpath.yaml
I1018 13:58:40.904755 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1018 13:58:40.908153 387939 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1018 13:58:40.908199 387939 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1018 13:58:40.908427 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3005313991 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1018 13:58:40.920914 387939 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1018 13:58:40.920962 387939 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1018 13:58:40.921165 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1050151305 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1018 13:58:40.928210 387939 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1018 13:58:40.928259 387939 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1018 13:58:40.928419 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1788306083 /etc/kubernetes/addons/metrics-server-service.yaml
I1018 13:58:40.928710 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1018 13:58:40.928969 387939 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
I1018 13:58:40.928990 387939 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1018 13:58:40.929102 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2208699821 /etc/kubernetes/addons/yakd-dp.yaml
I1018 13:58:40.935713 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1018 13:58:40.941373 387939 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1018 13:58:40.941999 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4038419665 /etc/kubernetes/addons/storageclass.yaml
I1018 13:58:40.966712 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1018 13:58:40.967677 387939 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1018 13:58:40.967724 387939 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1018 13:58:40.967871 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube861956710 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1018 13:58:40.969602 387939 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1018 13:58:40.969636 387939 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1018 13:58:40.969783 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube924870680 /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1018 13:58:40.975901 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1018 13:58:40.999734 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1018 13:58:41.023133 387939 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1018 13:58:41.023268 387939 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1018 13:58:41.023785 387939 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1018 13:58:41.023820 387939 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1018 13:58:41.024001 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2710352228 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1018 13:58:41.024680 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2192492169 /etc/kubernetes/addons/rbac-external-resizer.yaml
I1018 13:58:41.086705 387939 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1018 13:58:41.086762 387939 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1018 13:58:41.086953 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2004734207 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1018 13:58:41.097154 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1018 13:58:41.109268 387939 exec_runner.go:51] Run: sudo systemctl start kubelet
I1018 13:58:41.147892 387939 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1018 13:58:41.147946 387939 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1018 13:58:41.149115 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3785477138 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1018 13:58:41.210436 387939 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-6" to be "Ready" ...
I1018 13:58:41.212903 387939 node_ready.go:49] node "ubuntu-20-agent-6" is "Ready"
I1018 13:58:41.212982 387939 node_ready.go:38] duration metric: took 2.492663ms for node "ubuntu-20-agent-6" to be "Ready" ...
I1018 13:58:41.213031 387939 api_server.go:52] waiting for apiserver process to appear ...
I1018 13:58:41.213081 387939 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1018 13:58:41.230162 387939 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1018 13:58:41.230293 387939 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1018 13:58:41.230990 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1658086658 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1018 13:58:41.275562 387939 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1018 13:58:41.275618 387939 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1018 13:58:41.275793 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube98507459 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1018 13:58:41.306176 387939 api_server.go:72] duration metric: took 639.439633ms to wait for apiserver process to appear ...
I1018 13:58:41.306204 387939 api_server.go:88] waiting for apiserver healthz status ...
I1018 13:58:41.306281 387939 api_server.go:253] Checking apiserver healthz at https://10.154.0.2:8443/healthz ...
I1018 13:58:41.313786 387939 api_server.go:279] https://10.154.0.2:8443/healthz returned 200:
ok
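A minimal sketch of the healthz probe logged here: an HTTPS GET to https://10.154.0.2:8443/healthz that expects a 200 with body "ok". Skipping certificate verification is purely for illustration; the real check would trust the cluster CA instead.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: skip cert verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get("https://10.154.0.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("https://10.154.0.2:8443/healthz returned %d: %s\n", resp.StatusCode, body)
}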
I1018 13:58:41.337798 387939 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1018 13:58:41.337861 387939 api_server.go:141] control plane version: v1.34.1
I1018 13:58:41.338486 387939 api_server.go:131] duration metric: took 32.21668ms to wait for apiserver health ...
I1018 13:58:41.338501 387939 system_pods.go:43] waiting for kube-system pods to appear ...
I1018 13:58:41.338878 387939 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1018 13:58:41.339867 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3671355553 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1018 13:58:41.351065 387939 system_pods.go:59] 9 kube-system pods found
I1018 13:58:41.351250 387939 system_pods.go:61] "amd-gpu-device-plugin-jtlbj" [3ce6c74f-f83e-4a6f-b069-a78ff31feef1] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1018 13:58:41.351268 387939 system_pods.go:61] "coredns-66bc5c9577-ppk98" [20042ef8-8d1b-4d0a-a776-6faa6b15203c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1018 13:58:41.351282 387939 system_pods.go:61] "coredns-66bc5c9577-wlt7s" [d52612ea-bba6-4dfc-94d0-da1329596be2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1018 13:58:41.354442 387939 system_pods.go:61] "etcd-ubuntu-20-agent-6" [f3251dd1-c323-498a-a69c-e6991bf21509] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1018 13:58:41.354470 387939 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-6" [bf3edaf8-f110-424d-a0a7-136d7479870a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1018 13:58:41.354495 387939 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-6" [61fd20fd-23f9-4b86-b5b6-a295dd6b720f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1018 13:58:41.354507 387939 system_pods.go:61] "kube-proxy-bmfv9" [ea2e1942-c7ce-4fa2-bdd6-79e24903ab67] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1018 13:58:41.354518 387939 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-6" [ba87b078-7ada-4b34-b92d-1ab18393b2e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1018 13:58:41.354527 387939 system_pods.go:61] "registry-creds-764b6fb674-4h5dd" [2eaa7830-2c9b-4d0a-b54e-3d6b21274de9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1018 13:58:41.354536 387939 system_pods.go:74] duration metric: took 16.027034ms to wait for pod list to return data ...
I1018 13:58:41.354549 387939 default_sa.go:34] waiting for default service account to be created ...
I1018 13:58:41.359289 387939 default_sa.go:45] found service account: "default"
I1018 13:58:41.359326 387939 default_sa.go:55] duration metric: took 4.768885ms for default service account to be created ...
I1018 13:58:41.359337 387939 system_pods.go:116] waiting for k8s-apps to be running ...
I1018 13:58:41.365582 387939 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1018 13:58:41.365674 387939 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1018 13:58:41.365834 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube606638554 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1018 13:58:41.368693 387939 system_pods.go:86] 9 kube-system pods found
I1018 13:58:41.368840 387939 system_pods.go:89] "amd-gpu-device-plugin-jtlbj" [3ce6c74f-f83e-4a6f-b069-a78ff31feef1] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1018 13:58:41.368858 387939 system_pods.go:89] "coredns-66bc5c9577-ppk98" [20042ef8-8d1b-4d0a-a776-6faa6b15203c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1018 13:58:41.368987 387939 system_pods.go:89] "coredns-66bc5c9577-wlt7s" [d52612ea-bba6-4dfc-94d0-da1329596be2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1018 13:58:41.369006 387939 system_pods.go:89] "etcd-ubuntu-20-agent-6" [f3251dd1-c323-498a-a69c-e6991bf21509] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1018 13:58:41.369016 387939 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-6" [bf3edaf8-f110-424d-a0a7-136d7479870a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1018 13:58:41.369027 387939 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-6" [61fd20fd-23f9-4b86-b5b6-a295dd6b720f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1018 13:58:41.369042 387939 system_pods.go:89] "kube-proxy-bmfv9" [ea2e1942-c7ce-4fa2-bdd6-79e24903ab67] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1018 13:58:41.369162 387939 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-6" [ba87b078-7ada-4b34-b92d-1ab18393b2e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1018 13:58:41.369179 387939 system_pods.go:89] "registry-creds-764b6fb674-4h5dd" [2eaa7830-2c9b-4d0a-b54e-3d6b21274de9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1018 13:58:41.369209 387939 retry.go:31] will retry after 309.67405ms: missing components: kube-dns, kube-proxy
I1018 13:58:41.392645 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1018 13:58:41.420458 387939 start.go:976] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
I1018 13:58:41.694890 387939 system_pods.go:86] 10 kube-system pods found
I1018 13:58:41.694940 387939 system_pods.go:89] "amd-gpu-device-plugin-jtlbj" [3ce6c74f-f83e-4a6f-b069-a78ff31feef1] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1018 13:58:41.694951 387939 system_pods.go:89] "coredns-66bc5c9577-ppk98" [20042ef8-8d1b-4d0a-a776-6faa6b15203c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1018 13:58:41.694962 387939 system_pods.go:89] "coredns-66bc5c9577-wlt7s" [d52612ea-bba6-4dfc-94d0-da1329596be2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1018 13:58:41.694973 387939 system_pods.go:89] "etcd-ubuntu-20-agent-6" [f3251dd1-c323-498a-a69c-e6991bf21509] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1018 13:58:41.694983 387939 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-6" [bf3edaf8-f110-424d-a0a7-136d7479870a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1018 13:58:41.694999 387939 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-6" [61fd20fd-23f9-4b86-b5b6-a295dd6b720f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1018 13:58:41.695008 387939 system_pods.go:89] "kube-proxy-bmfv9" [ea2e1942-c7ce-4fa2-bdd6-79e24903ab67] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1018 13:58:41.695020 387939 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-6" [ba87b078-7ada-4b34-b92d-1ab18393b2e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1018 13:58:41.695029 387939 system_pods.go:89] "nvidia-device-plugin-daemonset-6ckqn" [bb659b60-a9b1-4cf3-9e07-84b0ce8c3caa] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1018 13:58:41.695038 387939 system_pods.go:89] "registry-creds-764b6fb674-4h5dd" [2eaa7830-2c9b-4d0a-b54e-3d6b21274de9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1018 13:58:41.695059 387939 retry.go:31] will retry after 389.979383ms: missing components: kube-dns, kube-proxy
I1018 13:58:41.818879 387939 addons.go:479] Verifying addon registry=true in "minikube"
I1018 13:58:41.828389 387939 out.go:179] * Verifying registry addon...
I1018 13:58:41.831832 387939 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1018 13:58:41.835885 387939 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1018 13:58:41.835910 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
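The kapi.go lines here (and the many that follow) are minikube polling pods by label selector until they report Ready. As a rough illustration of that kind of wait — not minikube's actual implementation; the intervals, timeout, and kubeconfig handling below are assumptions — a client-go sketch:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabeledPods polls until every pod matching selector in ns is Running
// and reports the Ready condition. Interval and timeout values are illustrative.
func waitForLabeledPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // not created yet or transient API error: keep polling
		}
		for i := range pods.Items {
			p := &pods.Items[i]
			if p.Status.Phase != corev1.PodRunning || !podReady(p) {
				return false, nil
			}
		}
		return true, nil
	})
}

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = waitForLabeledPods(context.Background(), cs, "kube-system",
		"kubernetes.io/minikube-addons=registry", 6*time.Minute)
	fmt.Println("wait result:", err)
}
```

In this log the registry pods never leave Pending, so a loop like this simply keeps printing the "current state: Pending" lines seen below until the caller's deadline expires.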
I1018 13:58:41.938757 387939 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I1018 13:58:42.003111 387939 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.027093095s)
I1018 13:58:42.005280 387939 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube service yakd-dashboard -n yakd-dashboard
I1018 13:58:42.100368 387939 system_pods.go:86] 13 kube-system pods found
I1018 13:58:42.100414 387939 system_pods.go:89] "amd-gpu-device-plugin-jtlbj" [3ce6c74f-f83e-4a6f-b069-a78ff31feef1] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1018 13:58:42.100425 387939 system_pods.go:89] "coredns-66bc5c9577-ppk98" [20042ef8-8d1b-4d0a-a776-6faa6b15203c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1018 13:58:42.100435 387939 system_pods.go:89] "coredns-66bc5c9577-wlt7s" [d52612ea-bba6-4dfc-94d0-da1329596be2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1018 13:58:42.100444 387939 system_pods.go:89] "etcd-ubuntu-20-agent-6" [f3251dd1-c323-498a-a69c-e6991bf21509] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1018 13:58:42.100454 387939 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-6" [bf3edaf8-f110-424d-a0a7-136d7479870a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1018 13:58:42.100464 387939 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-6" [61fd20fd-23f9-4b86-b5b6-a295dd6b720f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1018 13:58:42.100473 387939 system_pods.go:89] "kube-proxy-bmfv9" [ea2e1942-c7ce-4fa2-bdd6-79e24903ab67] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1018 13:58:42.100483 387939 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-6" [ba87b078-7ada-4b34-b92d-1ab18393b2e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1018 13:58:42.100490 387939 system_pods.go:89] "metrics-server-85b7d694d7-bw42j" [b9ec9d8a-831d-453e-89f6-f9c0101ac096] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1018 13:58:42.100512 387939 system_pods.go:89] "nvidia-device-plugin-daemonset-6ckqn" [bb659b60-a9b1-4cf3-9e07-84b0ce8c3caa] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1018 13:58:42.100520 387939 system_pods.go:89] "registry-6b586f9694-9q9zx" [d96c776c-3fb5-496d-8c3f-5013222660bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1018 13:58:42.100527 387939 system_pods.go:89] "registry-creds-764b6fb674-4h5dd" [2eaa7830-2c9b-4d0a-b54e-3d6b21274de9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1018 13:58:42.100535 387939 system_pods.go:89] "registry-proxy-pz778" [629a5cbd-c8af-4cfb-8f99-8883ff733e93] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1018 13:58:42.100558 387939 retry.go:31] will retry after 397.635527ms: missing components: kube-dns, kube-proxy
I1018 13:58:42.127292 387939 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.19152885s)
I1018 13:58:42.154847 387939 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.188071923s)
I1018 13:58:42.154927 387939 addons.go:479] Verifying addon metrics-server=true in "minikube"
I1018 13:58:42.154946 387939 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.249891588s)
W1018 13:58:42.154995 387939 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget created
serviceaccount/gadget created
configmap/gadget created
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
role.rbac.authorization.k8s.io/gadget-role created
rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
daemonset.apps/gadget created
stderr:
Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1018 13:58:42.155019 387939 retry.go:31] will retry after 358.295628ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget created
serviceaccount/gadget created
configmap/gadget created
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
role.rbac.authorization.k8s.io/gadget-role created
rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
daemonset.apps/gadget created
stderr:
Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
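The recurring failure above is kubectl's client-side validation: the ig-crd.yaml being applied has no top-level apiVersion or kind, so the object cannot be mapped to a resource and every retry (including the --force attempts that follow) hits the same error. A minimal Go sketch of the same missing-TypeMeta check — assuming the sigs.k8s.io/yaml and apimachinery modules; the file name and flow are illustrative, and only the first YAML document is inspected:

```go
package main

import (
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

// checkTypeMeta mirrors the validation kubectl is reporting in the log: a
// manifest whose top-level apiVersion or kind is empty cannot be mapped to a
// resource, so it is rejected before anything reaches the API server.
func checkTypeMeta(manifest []byte) error {
	var tm metav1.TypeMeta
	if err := yaml.Unmarshal(manifest, &tm); err != nil {
		return err
	}
	if tm.APIVersion == "" || tm.Kind == "" {
		return fmt.Errorf("apiVersion=%q kind=%q: both must be set", tm.APIVersion, tm.Kind)
	}
	return nil
}

func main() {
	// Path is illustrative; real addon manifests may contain multiple
	// YAML documents, of which this sketch only checks the first.
	data, err := os.ReadFile("ig-crd.yaml")
	if err != nil {
		panic(err)
	}
	if err := checkTypeMeta(data); err != nil {
		fmt.Println("invalid manifest:", err)
	}
}
```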
I1018 13:58:42.340577 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:42.506654 387939 system_pods.go:86] 14 kube-system pods found
I1018 13:58:42.506703 387939 system_pods.go:89] "amd-gpu-device-plugin-jtlbj" [3ce6c74f-f83e-4a6f-b069-a78ff31feef1] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1018 13:58:42.506714 387939 system_pods.go:89] "coredns-66bc5c9577-ppk98" [20042ef8-8d1b-4d0a-a776-6faa6b15203c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1018 13:58:42.506722 387939 system_pods.go:89] "coredns-66bc5c9577-wlt7s" [d52612ea-bba6-4dfc-94d0-da1329596be2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1018 13:58:42.506729 387939 system_pods.go:89] "etcd-ubuntu-20-agent-6" [f3251dd1-c323-498a-a69c-e6991bf21509] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1018 13:58:42.506741 387939 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-6" [bf3edaf8-f110-424d-a0a7-136d7479870a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1018 13:58:42.506750 387939 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-6" [61fd20fd-23f9-4b86-b5b6-a295dd6b720f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1018 13:58:42.506760 387939 system_pods.go:89] "kube-proxy-bmfv9" [ea2e1942-c7ce-4fa2-bdd6-79e24903ab67] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1018 13:58:42.506768 387939 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-6" [ba87b078-7ada-4b34-b92d-1ab18393b2e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1018 13:58:42.506786 387939 system_pods.go:89] "metrics-server-85b7d694d7-bw42j" [b9ec9d8a-831d-453e-89f6-f9c0101ac096] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1018 13:58:42.506793 387939 system_pods.go:89] "nvidia-device-plugin-daemonset-6ckqn" [bb659b60-a9b1-4cf3-9e07-84b0ce8c3caa] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1018 13:58:42.506800 387939 system_pods.go:89] "registry-6b586f9694-9q9zx" [d96c776c-3fb5-496d-8c3f-5013222660bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1018 13:58:42.506807 387939 system_pods.go:89] "registry-creds-764b6fb674-4h5dd" [2eaa7830-2c9b-4d0a-b54e-3d6b21274de9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1018 13:58:42.506814 387939 system_pods.go:89] "registry-proxy-pz778" [629a5cbd-c8af-4cfb-8f99-8883ff733e93] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1018 13:58:42.506820 387939 system_pods.go:89] "storage-provisioner" [48d670c9-0d3b-4ed7-b863-b1fc5d463c11] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1018 13:58:42.506844 387939 retry.go:31] will retry after 528.07326ms: missing components: kube-dns, kube-proxy
I1018 13:58:42.514216 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1018 13:58:42.839279 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:43.021465 387939 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.628720743s)
I1018 13:58:43.021546 387939 addons.go:479] Verifying addon csi-hostpath-driver=true in "minikube"
I1018 13:58:43.029442 387939 out.go:179] * Verifying csi-hostpath-driver addon...
I1018 13:58:43.041090 387939 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1018 13:58:43.066791 387939 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1018 13:58:43.066830 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:43.072426 387939 system_pods.go:86] 17 kube-system pods found
I1018 13:58:43.072473 387939 system_pods.go:89] "amd-gpu-device-plugin-jtlbj" [3ce6c74f-f83e-4a6f-b069-a78ff31feef1] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1018 13:58:43.072485 387939 system_pods.go:89] "coredns-66bc5c9577-ppk98" [20042ef8-8d1b-4d0a-a776-6faa6b15203c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1018 13:58:43.072498 387939 system_pods.go:89] "coredns-66bc5c9577-wlt7s" [d52612ea-bba6-4dfc-94d0-da1329596be2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1018 13:58:43.072509 387939 system_pods.go:89] "csi-hostpath-attacher-0" [2c1a9629-a5a0-440b-a595-7d0bb2781794] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1018 13:58:43.072515 387939 system_pods.go:89] "csi-hostpath-resizer-0" [3a50f785-b484-4d10-b925-934c5a6688f2] Pending
I1018 13:58:43.072525 387939 system_pods.go:89] "csi-hostpathplugin-wb4bj" [86eacced-d677-4c00-9ef9-790f159d09ed] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1018 13:58:43.072533 387939 system_pods.go:89] "etcd-ubuntu-20-agent-6" [f3251dd1-c323-498a-a69c-e6991bf21509] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1018 13:58:43.073481 387939 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-6" [bf3edaf8-f110-424d-a0a7-136d7479870a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1018 13:58:43.073521 387939 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-6" [61fd20fd-23f9-4b86-b5b6-a295dd6b720f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1018 13:58:43.073532 387939 system_pods.go:89] "kube-proxy-bmfv9" [ea2e1942-c7ce-4fa2-bdd6-79e24903ab67] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1018 13:58:43.073541 387939 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-6" [ba87b078-7ada-4b34-b92d-1ab18393b2e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1018 13:58:43.073550 387939 system_pods.go:89] "metrics-server-85b7d694d7-bw42j" [b9ec9d8a-831d-453e-89f6-f9c0101ac096] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1018 13:58:43.073559 387939 system_pods.go:89] "nvidia-device-plugin-daemonset-6ckqn" [bb659b60-a9b1-4cf3-9e07-84b0ce8c3caa] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1018 13:58:43.073568 387939 system_pods.go:89] "registry-6b586f9694-9q9zx" [d96c776c-3fb5-496d-8c3f-5013222660bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1018 13:58:43.073630 387939 system_pods.go:89] "registry-creds-764b6fb674-4h5dd" [2eaa7830-2c9b-4d0a-b54e-3d6b21274de9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1018 13:58:43.073645 387939 system_pods.go:89] "registry-proxy-pz778" [629a5cbd-c8af-4cfb-8f99-8883ff733e93] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1018 13:58:43.073653 387939 system_pods.go:89] "storage-provisioner" [48d670c9-0d3b-4ed7-b863-b1fc5d463c11] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1018 13:58:43.073676 387939 retry.go:31] will retry after 560.93383ms: missing components: kube-proxy
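The "will retry after …" lines, both for the missing kube-system components and for the failed applies, come from minikube's generic retry helper re-running the failing step after a growing delay. A rough sketch of that pattern — the backoff factor, jitter, and attempt count are assumptions, not minikube's real values:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts are exhausted,
// sleeping an increasing, lightly jittered delay between tries.
func retryWithBackoff(fn func() error, attempts int, initial time.Duration) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := retryWithBackoff(func() error {
		calls++
		if calls < 3 {
			return errors.New("missing components: kube-dns, kube-proxy")
		}
		return nil
	}, 5, 300*time.Millisecond)
	fmt.Println("result:", err)
}
```

For the component waits this converges quickly (kube-proxy and CoreDNS come up within a couple of seconds below); for the ig-crd.yaml apply it cannot converge, because every attempt fails the same client-side validation.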
I1018 13:58:43.345869 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:43.364166 387939 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.266915117s)
W1018 13:58:43.364291 387939 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1018 13:58:43.366876 387939 retry.go:31] will retry after 149.715483ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
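This apply fails only because the VolumeSnapshotClass object is submitted in the same batch that creates its CRD; the API server has not established the CRD yet, so the kind cannot be mapped ("ensure CRDs are installed first"), and the forced re-apply that completes at 13:58:46 succeeds once the CRDs are registered. A hedged sketch of waiting for a CRD's Established condition before applying custom resources — client construction, timeout, and the surrounding flow are illustrative:

```go
package main

import (
	"context"
	"fmt"
	"time"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForCRDEstablished blocks until the named CRD reports Established=True,
// the point at which kinds like VolumeSnapshotClass can be applied without
// the "no matches for kind" error seen above.
func waitForCRDEstablished(ctx context.Context, c apiextclient.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // not created yet or transient error: keep waiting
		}
		for _, cond := range crd.Status.Conditions {
			if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := apiextclient.NewForConfigOrDie(cfg)
	err = waitForCRDEstablished(context.Background(), client,
		"volumesnapshotclasses.snapshot.storage.k8s.io", 2*time.Minute)
	fmt.Println("CRD established:", err)
	// Only after this succeeds would a VolumeSnapshotClass manifest be applied.
}
```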
I1018 13:58:43.518109 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1018 13:58:43.555831 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:43.664064 387939 system_pods.go:86] 19 kube-system pods found
I1018 13:58:43.664183 387939 system_pods.go:89] "amd-gpu-device-plugin-jtlbj" [3ce6c74f-f83e-4a6f-b069-a78ff31feef1] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1018 13:58:43.664240 387939 system_pods.go:89] "coredns-66bc5c9577-ppk98" [20042ef8-8d1b-4d0a-a776-6faa6b15203c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1018 13:58:43.664264 387939 system_pods.go:89] "coredns-66bc5c9577-wlt7s" [d52612ea-bba6-4dfc-94d0-da1329596be2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1018 13:58:43.664326 387939 system_pods.go:89] "csi-hostpath-attacher-0" [2c1a9629-a5a0-440b-a595-7d0bb2781794] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1018 13:58:43.664348 387939 system_pods.go:89] "csi-hostpath-resizer-0" [3a50f785-b484-4d10-b925-934c5a6688f2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1018 13:58:43.664392 387939 system_pods.go:89] "csi-hostpathplugin-wb4bj" [86eacced-d677-4c00-9ef9-790f159d09ed] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1018 13:58:43.664403 387939 system_pods.go:89] "etcd-ubuntu-20-agent-6" [f3251dd1-c323-498a-a69c-e6991bf21509] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1018 13:58:43.664421 387939 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-6" [bf3edaf8-f110-424d-a0a7-136d7479870a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1018 13:58:43.664430 387939 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-6" [61fd20fd-23f9-4b86-b5b6-a295dd6b720f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1018 13:58:43.664471 387939 system_pods.go:89] "kube-proxy-bmfv9" [ea2e1942-c7ce-4fa2-bdd6-79e24903ab67] Running
I1018 13:58:43.664494 387939 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-6" [ba87b078-7ada-4b34-b92d-1ab18393b2e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1018 13:58:43.664514 387939 system_pods.go:89] "metrics-server-85b7d694d7-bw42j" [b9ec9d8a-831d-453e-89f6-f9c0101ac096] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1018 13:58:43.664796 387939 system_pods.go:89] "nvidia-device-plugin-daemonset-6ckqn" [bb659b60-a9b1-4cf3-9e07-84b0ce8c3caa] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1018 13:58:43.664929 387939 system_pods.go:89] "registry-6b586f9694-9q9zx" [d96c776c-3fb5-496d-8c3f-5013222660bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1018 13:58:43.664941 387939 system_pods.go:89] "registry-creds-764b6fb674-4h5dd" [2eaa7830-2c9b-4d0a-b54e-3d6b21274de9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1018 13:58:43.665238 387939 system_pods.go:89] "registry-proxy-pz778" [629a5cbd-c8af-4cfb-8f99-8883ff733e93] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1018 13:58:43.665332 387939 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9vmbb" [d5568ed2-51bf-41b3-8f10-423246d70d33] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1018 13:58:43.665345 387939 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g4gfd" [b0296d57-4c97-468f-b15e-d53cddd19daa] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1018 13:58:43.665354 387939 system_pods.go:89] "storage-provisioner" [48d670c9-0d3b-4ed7-b863-b1fc5d463c11] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1018 13:58:43.665608 387939 system_pods.go:126] duration metric: took 2.306022162s to wait for k8s-apps to be running ...
I1018 13:58:43.665632 387939 system_svc.go:44] waiting for kubelet service to be running ....
I1018 13:58:43.666942 387939 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I1018 13:58:43.698583 387939 system_svc.go:56] duration metric: took 32.938272ms WaitForService to wait for kubelet
I1018 13:58:43.698638 387939 kubeadm.go:586] duration metric: took 3.031906439s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1018 13:58:43.698665 387939 node_conditions.go:102] verifying NodePressure condition ...
I1018 13:58:43.703490 387939 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1018 13:58:43.703528 387939 node_conditions.go:123] node cpu capacity is 8
I1018 13:58:43.703545 387939 node_conditions.go:105] duration metric: took 4.873131ms to run NodePressure ...
I1018 13:58:43.703560 387939 start.go:241] waiting for startup goroutines ...
I1018 13:58:43.838782 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:44.005887 387939 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.124879579s)
I1018 13:58:44.049559 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:44.282374 387939 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.768100532s)
W1018 13:58:44.282422 387939 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1018 13:58:44.282448 387939 retry.go:31] will retry after 520.822907ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1018 13:58:44.336689 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:44.545852 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:44.804374 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1018 13:58:44.835997 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:45.045072 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:45.335837 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
W1018 13:58:45.367999 387939 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1018 13:58:45.368034 387939 retry.go:31] will retry after 773.038082ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1018 13:58:45.545031 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:45.836199 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:46.044827 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:46.141924 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1018 13:58:46.335050 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:46.519744 387939 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.001587414s)
I1018 13:58:46.545487 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
W1018 13:58:46.731461 387939 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1018 13:58:46.731496 387939 retry.go:31] will retry after 792.29128ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1018 13:58:46.835688 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:47.045432 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:47.335716 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:47.524522 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1018 13:58:47.546084 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:47.836363 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:48.046295 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:48.216804 387939 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1018 13:58:48.216980 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube702392307 /var/lib/minikube/google_application_credentials.json
I1018 13:58:48.233958 387939 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1018 13:58:48.234114 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube424936780 /var/lib/minikube/google_cloud_project
I1018 13:58:48.251434 387939 addons.go:238] Setting addon gcp-auth=true in "minikube"
I1018 13:58:48.251506 387939 host.go:66] Checking if "minikube" exists ...
I1018 13:58:48.253294 387939 kubeconfig.go:125] found "minikube" server: "https://10.154.0.2:8443"
I1018 13:58:48.254419 387939 api_server.go:166] Checking apiserver status ...
I1018 13:58:48.254478 387939 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1018 13:58:48.280850 387939 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup
W1018 13:58:48.299822 387939 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/389367/cgroup: exit status 1
stdout:
stderr:
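This warning is expected on hosts using the unified cgroup v2 hierarchy: /proc/<pid>/cgroup then contains a single "0::/…" entry and no per-controller "freezer:" line, so the egrep matches nothing and the flow simply continues with the healthz probe below. A small illustrative check of which layout a process is under (not minikube's code):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// cgroupMode reports whether /proc/<pid>/cgroup looks like the legacy v1
// layout (per-controller "N:controller:/path" lines, including freezer)
// or the unified v2 layout (a single "0::/path" line, no freezer entry).
func cgroupMode(pid int) (string, error) {
	f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.Contains(line, ":freezer:") {
			return "v1 (freezer controller present)", nil
		}
		if strings.HasPrefix(line, "0::") {
			return "v2 (unified hierarchy, no freezer line)", nil
		}
	}
	return "unknown", sc.Err()
}

func main() {
	mode, err := cgroupMode(os.Getpid())
	fmt.Println(mode, err)
}
```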
I1018 13:58:48.299892 387939 exec_runner.go:51] Run: ls
I1018 13:58:48.301684 387939 api_server.go:253] Checking apiserver healthz at https://10.154.0.2:8443/healthz ...
I1018 13:58:48.306983 387939 api_server.go:279] https://10.154.0.2:8443/healthz returned 200:
ok
I1018 13:58:48.307055 387939 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
I1018 13:58:48.313182 387939 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
I1018 13:58:48.314634 387939 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1018 13:58:48.315874 387939 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1018 13:58:48.315918 387939 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1018 13:58:48.316089 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2337751668 /etc/kubernetes/addons/gcp-auth-ns.yaml
I1018 13:58:48.333971 387939 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1018 13:58:48.334015 387939 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1018 13:58:48.334161 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube213198624 /etc/kubernetes/addons/gcp-auth-service.yaml
I1018 13:58:48.336460 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:48.351184 387939 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1018 13:58:48.351227 387939 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1018 13:58:48.351447 387939 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4048335241 /etc/kubernetes/addons/gcp-auth-webhook.yaml
W1018 13:58:48.360928 387939 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1018 13:58:48.360970 387939 retry.go:31] will retry after 1.219531503s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1018 13:58:48.366952 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1018 13:58:48.544862 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:48.779937 387939 addons.go:479] Verifying addon gcp-auth=true in "minikube"
I1018 13:58:48.784774 387939 out.go:179] * Verifying gcp-auth addon...
I1018 13:58:48.788193 387939 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1018 13:58:48.792619 387939 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1018 13:58:48.792644 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:58:48.836067 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:49.044889 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:49.292569 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:58:49.335543 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:49.559071 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:49.581489 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1018 13:58:49.803757 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:58:49.860032 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:50.045915 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:50.292168 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:58:50.392940 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
W1018 13:58:50.406728 387939 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1018 13:58:50.406784 387939 retry.go:31] will retry after 2.504271423s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1018 13:58:50.545025 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:50.792067 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:58:50.835547 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:51.044980 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:51.292045 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:58:51.335411 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:51.546213 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:51.791611 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:58:51.835741 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:52.046010 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:52.292294 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:58:52.335432 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:52.544717 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:52.791809 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:58:52.835964 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:52.912119 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1018 13:58:53.045252 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:53.291307 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:58:53.334424 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:53.545570 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
W1018 13:58:53.669073 387939 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1018 13:58:53.669111 387939 retry.go:31] will retry after 3.596367689s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1018 13:58:53.792368 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:58:53.835564 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:54.045321 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:54.291895 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:58:54.336204 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:54.544937 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:54.792364 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:58:54.835539 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:55.045158 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:55.292466 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:58:55.335578 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:55.544821 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:55.792169 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:58:55.835028 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:56.175292 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:56.312241 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:58:56.335146 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:56.544483 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:56.791489 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:58:56.835205 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:57.044476 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:57.266599 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1018 13:58:57.292201 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:58:57.336484 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:57.545064 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:57.791811 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:58:57.835601 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
W1018 13:58:57.848784 387939 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1018 13:58:57.848818 387939 retry.go:31] will retry after 2.8244712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1018 13:58:58.044895 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:58.292070 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:58:58.335434 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:58.544925 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:58.791636 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:58:58.835578 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:59.044695 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:59.291378 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:58:59.335380 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:58:59.544903 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:58:59.792173 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:58:59.835540 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:59:00.045213 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:00.292611 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:00.335618 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:59:00.620601 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:00.674482 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1018 13:59:00.792709 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:00.893492 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:59:01.044588 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
W1018 13:59:01.253343 387939 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1018 13:59:01.253378 387939 retry.go:31] will retry after 7.24115055s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1018 13:59:01.291477 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:01.335262 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:59:01.544367 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:01.792607 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:01.836032 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:59:02.045448 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:02.292001 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:02.334664 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:59:02.545598 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:02.791585 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:02.835441 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:59:03.044773 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:03.291936 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:03.335855 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:59:03.545999 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:03.792861 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:03.836118 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:59:04.044843 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:04.292956 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:04.335710 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:59:04.573628 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:04.791993 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:04.835115 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:59:05.044706 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:05.291700 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:05.335665 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:59:05.545153 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:05.792467 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:05.835765 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:59:06.045409 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:06.291887 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:06.392652 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:59:06.546785 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:06.792137 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:06.835430 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:59:07.058960 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:07.292240 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:07.335190 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1018 13:59:07.544500 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:07.792156 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:07.835093 387939 kapi.go:107] duration metric: took 26.003262174s to wait for kubernetes.io/minikube-addons=registry ...
I1018 13:59:08.045415 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:08.291560 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:08.494994 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1018 13:59:08.544644 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:08.791181 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:09.044407 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
W1018 13:59:09.102120 387939 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1018 13:59:09.102163 387939 retry.go:31] will retry after 11.899027457s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1018 13:59:09.291340 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:09.544968 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:09.792373 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:10.047179 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:10.291989 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:10.545087 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:10.791987 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:11.043858 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:11.292313 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:11.544869 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:11.793469 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:12.045597 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:12.291907 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:12.546728 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:12.792015 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:13.046165 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:13.292453 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:13.545346 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:13.791513 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:14.045158 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:14.292422 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:14.545448 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:14.791824 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:15.046513 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:15.291413 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:15.547182 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:15.792591 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:16.045452 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:16.292151 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:16.544856 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:16.792236 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:17.044752 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:17.291850 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:17.545958 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:17.791803 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:18.045750 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:18.292077 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:18.544324 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:18.791858 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:19.044944 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:19.291787 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:19.545719 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:19.791441 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:20.044976 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:20.291824 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:20.545677 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:20.792812 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:21.002336 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1018 13:59:21.052600 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:21.291427 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:21.544994 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
W1018 13:59:21.650208 387939 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1018 13:59:21.650245 387939 retry.go:31] will retry after 16.078214393s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1018 13:59:21.791552 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:22.045136 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:22.292156 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:22.544757 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:22.791424 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:23.044681 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1018 13:59:23.292248 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:23.545389 387939 kapi.go:107] duration metric: took 40.504299764s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1018 13:59:23.791488 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:24.292134 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:24.792099 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:25.291717 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:25.792200 387939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1018 13:59:26.291498 387939 kapi.go:107] duration metric: took 37.503306341s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1018 13:59:26.293259 387939 out.go:179] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
I1018 13:59:26.294675 387939 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1018 13:59:26.295950 387939 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
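The three gcp-auth messages above describe how to opt a pod out of credential mounting: set a label with the `gcp-auth-skip-secret` key in the pod spec before the pod is created. A minimal sketch of such a pod manifest follows; the pod name, container, and image are placeholders, and the label value "true" is an assumption (the addon message only names the key).

apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds                 # placeholder pod name
  labels:
    gcp-auth-skip-secret: "true"     # key taken from the addon message; value "true" assumed
spec:
  containers:
  - name: app                        # placeholder container
    image: busybox:1.36              # placeholder image
    command: ["sleep", "3600"]

Because the gcp-auth webhook acts at admission time, the label has to be present when the pod is created; labeling an already-running pod would not remove an existing mount, which is why the message suggests recreating pods or rerunning the addon with --refresh.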
I1018 13:59:37.730547 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
W1018 13:59:38.296907 387939 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1018 13:59:38.296945 387939 retry.go:31] will retry after 31.101984851s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I1018 14:00:09.402465 387939 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
W1018 14:00:09.974714 387939 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
W1018 14:00:09.974864 387939 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: exit status 1
stdout:
namespace/gadget unchanged
serviceaccount/gadget unchanged
configmap/gadget unchanged
clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/gadget-role unchanged
rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
daemonset.apps/gadget configured
stderr:
error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
]
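The repeated ig-crd.yaml failure above is kubectl's schema validation rejecting a manifest document that lacks its required `apiVersion` and `kind` fields; every document applied must carry both, or validation must be switched off with --validate=false as the error text suggests. For context, a minimal sketch of what a well-formed CustomResourceDefinition header looks like is shown below; all names in it are placeholders for illustration and are not taken from the actual contents of ig-crd.yaml.

apiVersion: apiextensions.k8s.io/v1      # required top-level field the error reports as missing
kind: CustomResourceDefinition           # required top-level field the error reports as missing
metadata:
  name: traces.gadget.example.io         # placeholder: must be <plural>.<group>
spec:
  group: gadget.example.io               # placeholder API group
  scope: Namespaced
  names:
    plural: traces
    singular: trace
    kind: Trace
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true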
I1018 14:00:09.976874 387939 out.go:179] * Enabled addons: amd-gpu-device-plugin, registry-creds, nvidia-device-plugin, cloud-spanner, default-storageclass, yakd, storage-provisioner, metrics-server, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
I1018 14:00:09.978265 387939 addons.go:514] duration metric: took 1m29.316587875s for enable addons: enabled=[amd-gpu-device-plugin registry-creds nvidia-device-plugin cloud-spanner default-storageclass yakd storage-provisioner metrics-server volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
I1018 14:00:09.978332 387939 start.go:246] waiting for cluster config update ...
I1018 14:00:09.978353 387939 start.go:255] writing updated cluster config ...
I1018 14:00:09.978612 387939 exec_runner.go:51] Run: rm -f paused
I1018 14:00:09.979801 387939 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1018 14:00:09.983885 387939 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ppk98" in "kube-system" namespace to be "Ready" or be gone ...
I1018 14:00:09.988409 387939 pod_ready.go:94] pod "coredns-66bc5c9577-ppk98" is "Ready"
I1018 14:00:09.988432 387939 pod_ready.go:86] duration metric: took 4.526016ms for pod "coredns-66bc5c9577-ppk98" in "kube-system" namespace to be "Ready" or be gone ...
I1018 14:00:09.990468 387939 pod_ready.go:83] waiting for pod "etcd-ubuntu-20-agent-6" in "kube-system" namespace to be "Ready" or be gone ...
I1018 14:00:09.994063 387939 pod_ready.go:94] pod "etcd-ubuntu-20-agent-6" is "Ready"
I1018 14:00:09.994081 387939 pod_ready.go:86] duration metric: took 3.591864ms for pod "etcd-ubuntu-20-agent-6" in "kube-system" namespace to be "Ready" or be gone ...
I1018 14:00:09.995913 387939 pod_ready.go:83] waiting for pod "kube-apiserver-ubuntu-20-agent-6" in "kube-system" namespace to be "Ready" or be gone ...
I1018 14:00:09.999728 387939 pod_ready.go:94] pod "kube-apiserver-ubuntu-20-agent-6" is "Ready"
I1018 14:00:09.999748 387939 pod_ready.go:86] duration metric: took 3.816445ms for pod "kube-apiserver-ubuntu-20-agent-6" in "kube-system" namespace to be "Ready" or be gone ...
I1018 14:00:10.001541 387939 pod_ready.go:83] waiting for pod "kube-controller-manager-ubuntu-20-agent-6" in "kube-system" namespace to be "Ready" or be gone ...
I1018 14:00:10.384095 387939 pod_ready.go:94] pod "kube-controller-manager-ubuntu-20-agent-6" is "Ready"
I1018 14:00:10.384126 387939 pod_ready.go:86] duration metric: took 382.56598ms for pod "kube-controller-manager-ubuntu-20-agent-6" in "kube-system" namespace to be "Ready" or be gone ...
I1018 14:00:10.586783 387939 pod_ready.go:83] waiting for pod "kube-proxy-bmfv9" in "kube-system" namespace to be "Ready" or be gone ...
I1018 14:00:10.984434 387939 pod_ready.go:94] pod "kube-proxy-bmfv9" is "Ready"
I1018 14:00:10.984465 387939 pod_ready.go:86] duration metric: took 397.654974ms for pod "kube-proxy-bmfv9" in "kube-system" namespace to be "Ready" or be gone ...
I1018 14:00:11.184081 387939 pod_ready.go:83] waiting for pod "kube-scheduler-ubuntu-20-agent-6" in "kube-system" namespace to be "Ready" or be gone ...
I1018 14:00:11.584096 387939 pod_ready.go:94] pod "kube-scheduler-ubuntu-20-agent-6" is "Ready"
I1018 14:00:11.584133 387939 pod_ready.go:86] duration metric: took 400.022334ms for pod "kube-scheduler-ubuntu-20-agent-6" in "kube-system" namespace to be "Ready" or be gone ...
I1018 14:00:11.584149 387939 pod_ready.go:40] duration metric: took 1.604321095s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1018 14:00:11.630734 387939 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
I1018 14:00:11.632602 387939 out.go:179] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
==> Docker <==
Oct 18 14:00:33 ubuntu-20-agent-6 dockerd[388158]: time="2025-10-18T14:00:33.081881577Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 18 14:00:40 ubuntu-20-agent-6 dockerd[388158]: time="2025-10-18T14:00:40.599722006Z" level=warning msg="reference for unknown type: " digest="sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242" remote="docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242"
Oct 18 14:00:41 ubuntu-20-agent-6 dockerd[388158]: time="2025-10-18T14:00:41.083895107Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 18 14:00:41 ubuntu-20-agent-6 dockerd[388158]: time="2025-10-18T14:00:41.328414358Z" level=warning msg="reference for unknown type: " digest="sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624" remote="docker.io/marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
Oct 18 14:00:41 ubuntu-20-agent-6 dockerd[388158]: time="2025-10-18T14:00:41.805442249Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 18 14:00:44 ubuntu-20-agent-6 dockerd[388158]: time="2025-10-18T14:00:44.598527869Z" level=warning msg="reference for unknown type: " digest="sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34" remote="docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34"
Oct 18 14:00:45 ubuntu-20-agent-6 dockerd[388158]: time="2025-10-18T14:00:45.077148476Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 18 14:02:02 ubuntu-20-agent-6 dockerd[388158]: time="2025-10-18T14:02:02.602379562Z" level=warning msg="reference for unknown type: " digest="sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242" remote="docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242"
Oct 18 14:02:03 ubuntu-20-agent-6 dockerd[388158]: time="2025-10-18T14:02:03.361947914Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 18 14:02:03 ubuntu-20-agent-6 cri-dockerd[388542]: time="2025-10-18T14:02:03Z" level=info msg="Stop pulling image docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242: docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242: Pulling from volcanosh/vc-controller-manager"
Oct 18 14:02:04 ubuntu-20-agent-6 dockerd[388158]: time="2025-10-18T14:02:04.597839942Z" level=warning msg="reference for unknown type: " digest="sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001" remote="docker.io/volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001"
Oct 18 14:02:05 ubuntu-20-agent-6 dockerd[388158]: time="2025-10-18T14:02:05.077337038Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 18 14:02:10 ubuntu-20-agent-6 dockerd[388158]: time="2025-10-18T14:02:10.598730103Z" level=warning msg="reference for unknown type: " digest="sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624" remote="docker.io/marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
Oct 18 14:02:11 ubuntu-20-agent-6 dockerd[388158]: time="2025-10-18T14:02:11.076366567Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 18 14:02:12 ubuntu-20-agent-6 dockerd[388158]: time="2025-10-18T14:02:12.598969643Z" level=warning msg="reference for unknown type: " digest="sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34" remote="docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34"
Oct 18 14:02:13 ubuntu-20-agent-6 dockerd[388158]: time="2025-10-18T14:02:13.075618593Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 18 14:04:51 ubuntu-20-agent-6 dockerd[388158]: time="2025-10-18T14:04:51.600845494Z" level=warning msg="reference for unknown type: " digest="sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242" remote="docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242"
Oct 18 14:04:52 ubuntu-20-agent-6 dockerd[388158]: time="2025-10-18T14:04:52.361797380Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 18 14:04:52 ubuntu-20-agent-6 cri-dockerd[388542]: time="2025-10-18T14:04:52Z" level=info msg="Stop pulling image docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242: docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242: Pulling from volcanosh/vc-controller-manager"
Oct 18 14:04:53 ubuntu-20-agent-6 dockerd[388158]: time="2025-10-18T14:04:53.598996727Z" level=warning msg="reference for unknown type: " digest="sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34" remote="docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34"
Oct 18 14:04:54 ubuntu-20-agent-6 dockerd[388158]: time="2025-10-18T14:04:54.075994930Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 18 14:04:54 ubuntu-20-agent-6 dockerd[388158]: time="2025-10-18T14:04:54.599968737Z" level=warning msg="reference for unknown type: " digest="sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001" remote="docker.io/volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001"
Oct 18 14:04:55 ubuntu-20-agent-6 dockerd[388158]: time="2025-10-18T14:04:55.072834305Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 18 14:05:02 ubuntu-20-agent-6 dockerd[388158]: time="2025-10-18T14:05:02.601863207Z" level=warning msg="reference for unknown type: " digest="sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624" remote="docker.io/marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
Oct 18 14:05:03 ubuntu-20-agent-6 dockerd[388158]: time="2025-10-18T14:05:03.146732316Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
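The pull failures in the Docker log above are Docker Hub's anonymous pull rate limit, which is also what left the volcano and yakd pods in ImagePullBackOff. A common general workaround is to pull with authenticated credentials, for example by attaching an image pull secret to the affected pods or their service account. The sketch below illustrates the mechanism only; it assumes a docker-registry secret named dockerhub-creds has already been created in the namespace, the pod name is a placeholder, and this is not a change to the addon's own managed manifests.

apiVersion: v1
kind: Pod
metadata:
  name: authenticated-pull-example       # placeholder pod name
  namespace: volcano-system              # namespace where the pulls were failing
spec:
  imagePullSecrets:
  - name: dockerhub-creds                # assumed pre-created docker-registry secret
  containers:
  - name: scheduler
    image: docker.io/volcanosh/vc-scheduler:v1.13.0   # image from the failing pull, tag only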
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
95a5eb24db3f9 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7 6 minutes ago Running gcp-auth 0 dc9f26c27a3f5 gcp-auth-78565c9fb4-bm27f
3231be1c1ae84 registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 6 minutes ago Running csi-snapshotter 0 f86d807c7395d csi-hostpathplugin-wb4bj
c7572505593fb registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 6 minutes ago Running csi-provisioner 0 f86d807c7395d csi-hostpathplugin-wb4bj
4f4565b15bf2b registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0 6 minutes ago Running liveness-probe 0 f86d807c7395d csi-hostpathplugin-wb4bj
6bf19a14c1418 registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 6 minutes ago Running hostpath 0 f86d807c7395d csi-hostpathplugin-wb4bj
0ef5c10500470 registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c 6 minutes ago Running node-driver-registrar 0 f86d807c7395d csi-hostpathplugin-wb4bj
6ea59c7d126c0 registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 6 minutes ago Running volume-snapshot-controller 0 ccb73156ae545 snapshot-controller-7d9fbc56b8-9vmbb
ab88884b4fae1 registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 6 minutes ago Running volume-snapshot-controller 0 2128b624eb2d2 snapshot-controller-7d9fbc56b8-g4gfd
5d6f5f918de4e registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 6 minutes ago Running csi-resizer 0 5e2ca1fbd790c csi-hostpath-resizer-0
88556306f775d registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 6 minutes ago Running csi-attacher 0 a1099d55076f0 csi-hostpath-attacher-0
7d9dc5cfc0cca registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c 7 minutes ago Running csi-external-health-monitor-controller 0 f86d807c7395d csi-hostpathplugin-wb4bj
338eb523635de ghcr.io/inspektor-gadget/inspektor-gadget@sha256:df0516c4c988694d65b19400d0990f129d5fd68f211cc826e7fdad55140626fd 7 minutes ago Running gadget 0 654932130afbf gadget-ds5bz
a8f14ff054678 gcr.io/k8s-minikube/kube-registry-proxy@sha256:f832bbe1d48c62de040bd793937eaa0c05d2f945a55376a99c80a4dd9961aeb1 7 minutes ago Running registry-proxy 0 662104b29d326 registry-proxy-pz778
578b324657d70 registry.k8s.io/metrics-server/metrics-server@sha256:89258156d0e9af60403eafd44da9676fd66f600c7934d468ccc17e42b199aee2 7 minutes ago Running metrics-server 0 37cbbe17f765b metrics-server-85b7d694d7-bw42j
00dfe8471be12 registry@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e 7 minutes ago Running registry 0 0996ed43c8c22 registry-6b586f9694-9q9zx
d115b5b43f5d9 gcr.io/cloud-spanner-emulator/emulator@sha256:335f6daa572494373ab0e16f6f574aced7425f3755182faf42089f838d6f38e1 7 minutes ago Running cloud-spanner-emulator 0 817aab40c13e0 cloud-spanner-emulator-86bd5cbb97-7swl7
73070fba9cbbe nvcr.io/nvidia/k8s-device-plugin@sha256:3c54348fe5a57e5700e7d8068e7531d2ef2d5f3ccb70c8f6bac0953432527abd 7 minutes ago Running nvidia-device-plugin-ctr 0 7a8482c62a3d2 nvidia-device-plugin-daemonset-6ckqn
604c91c187216 rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 7 minutes ago Running amd-gpu-device-plugin 0 d0bfb88877407 amd-gpu-device-plugin-jtlbj
ff4647d21b1ff 6e38f40d628db 7 minutes ago Running storage-provisioner 0 177ae860491a8 storage-provisioner
d620257aaf935 52546a367cc9e 7 minutes ago Running coredns 0 fa6cab67c971d coredns-66bc5c9577-ppk98
7cada6cadb726 fc25172553d79 7 minutes ago Running kube-proxy 0 a03a32ed1446b kube-proxy-bmfv9
da10ffa19a775 5f1f5298c888d 7 minutes ago Running etcd 0 d6488fd8f241b etcd-ubuntu-20-agent-6
1398f3ba960fa c80c8dbafe7dd 7 minutes ago Running kube-controller-manager 0 1e0cc8775b943 kube-controller-manager-ubuntu-20-agent-6
a03a41cbf2911 7dd6aaa1717ab 7 minutes ago Running kube-scheduler 0 7659c101647a6 kube-scheduler-ubuntu-20-agent-6
022a004f35df6 c3994bc696102 7 minutes ago Running kube-apiserver 0 4741c7f07272d kube-apiserver-ubuntu-20-agent-6
==> coredns [d620257aaf93] <==
maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 876af57068f747144f204884e843f6792435faec005aab1f10bd81e6ffca54e010e4374994d8f544c4f6711272ab5662d0892980e63ccc3ba8ba9e3fbcc5e4d9
[INFO] Reloading complete
[INFO] 127.0.0.1:57881 - 17368 "HINFO IN 5946551558577910388.6342905566971241795. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.005518181s
[INFO] 10.244.0.23:49133 - 38371 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000365444s
[INFO] 10.244.0.23:42340 - 40969 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000152468s
[INFO] 10.244.0.23:45349 - 30242 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000101943s
[INFO] 10.244.0.23:56507 - 19058 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000091675s
[INFO] 10.244.0.23:44888 - 47721 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000123891s
[INFO] 10.244.0.23:59401 - 40285 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000177893s
[INFO] 10.244.0.23:59405 - 38751 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.00146405s
[INFO] 10.244.0.23:48674 - 4654 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.002200748s
[INFO] 10.244.0.23:55851 - 63336 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.002728809s
[INFO] 10.244.0.23:34635 - 24842 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.00605845s
[INFO] 10.244.0.23:50179 - 57993 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.002571993s
[INFO] 10.244.0.23:51353 - 9756 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00305883s
[INFO] 10.244.0.23:51920 - 21904 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.002177662s
[INFO] 10.244.0.23:58910 - 18469 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00241384s
[INFO] 10.244.0.23:42057 - 64859 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00126789s
[INFO] 10.244.0.23:35451 - 16601 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.001561735s
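The search-suffix walk above (gcp-auth.svc.cluster.local, svc.cluster.local, cluster.local, then the GCE host domains, then the bare name answered NOERROR) is the standard ndots:5 expansion from the querying pod's resolv.conf rather than a CoreDNS fault. A minimal way to confirm that, assuming the querier at 10.244.0.23 is the gcp-auth webhook and its Deployment is named gcp-auth (neither is verified in this log), would be:
kubectl --context minikube -n gcp-auth exec deploy/gcp-auth -- cat /etc/resolv.conf
# expected shape (illustrative only):
#   search gcp-auth.svc.cluster.local svc.cluster.local cluster.local ...
#   options ndots:5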
==> describe nodes <==
Name: ubuntu-20-agent-6
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ubuntu-20-agent-6
kubernetes.io/os=linux
minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
minikube.k8s.io/name=minikube
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_10_18T13_58_36_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=ubuntu-20-agent-6
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-6"}
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 18 Oct 2025 13:58:33 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ubuntu-20-agent-6
AcquireTime: <unset>
RenewTime: Sat, 18 Oct 2025 14:06:04 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 18 Oct 2025 13:59:36 +0000 Sat, 18 Oct 2025 13:58:31 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 18 Oct 2025 13:59:36 +0000 Sat, 18 Oct 2025 13:58:31 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 18 Oct 2025 13:59:36 +0000 Sat, 18 Oct 2025 13:58:31 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 18 Oct 2025 13:59:36 +0000 Sat, 18 Oct 2025 13:58:34 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.154.0.2
Hostname: ubuntu-20-agent-6
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863452Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863452Ki
pods: 110
System Info:
Machine ID: 591c9f1229383743e2bfc56a050d43d1
System UUID: 4944fbb2-5921-58ec-4846-2b7d3bcb94ac
Boot ID: 5abcca8a-7dd2-4d0b-8aeb-306bbc2c257c
Kernel Version: 6.8.0-1041-gcp
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://28.5.1
Kubelet Version: v1.34.1
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (26 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default cloud-spanner-emulator-86bd5cbb97-7swl7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m32s
gadget gadget-ds5bz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m31s
gcp-auth gcp-auth-78565c9fb4-bm27f 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m25s
kube-system amd-gpu-device-plugin-jtlbj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m32s
kube-system coredns-66bc5c9577-ppk98 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 7m32s
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m31s
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m31s
kube-system csi-hostpathplugin-wb4bj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m31s
kube-system etcd-ubuntu-20-agent-6 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 7m39s
kube-system kube-apiserver-ubuntu-20-agent-6 250m (3%) 0 (0%) 0 (0%) 0 (0%) 7m39s
kube-system kube-controller-manager-ubuntu-20-agent-6 200m (2%) 0 (0%) 0 (0%) 0 (0%) 7m39s
kube-system kube-proxy-bmfv9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m32s
kube-system kube-scheduler-ubuntu-20-agent-6 100m (1%) 0 (0%) 0 (0%) 0 (0%) 7m39s
kube-system metrics-server-85b7d694d7-bw42j 100m (1%) 0 (0%) 200Mi (0%) 0 (0%) 7m32s
kube-system nvidia-device-plugin-daemonset-6ckqn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m32s
kube-system registry-6b586f9694-9q9zx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m32s
kube-system registry-creds-764b6fb674-4h5dd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m32s
kube-system registry-proxy-pz778 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m32s
kube-system snapshot-controller-7d9fbc56b8-9vmbb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m30s
kube-system snapshot-controller-7d9fbc56b8-g4gfd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m30s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m31s
volcano-system volcano-admission-6c447bd768-wr27z 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m31s
volcano-system volcano-admission-init-29wdz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m31s
volcano-system volcano-controllers-6fd4f85cb8-qvrf9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m30s
volcano-system volcano-scheduler-76c996c8bf-xpx6k 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m30s
yakd-dashboard yakd-dashboard-5ff678cb9-m5q8q 0 (0%) 0 (0%) 128Mi (0%) 256Mi (0%) 7m31s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (10%) 0 (0%)
memory 498Mi (1%) 426Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 7m30s kube-proxy
Normal Starting 7m38s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 7m38s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 7m38s kubelet Node ubuntu-20-agent-6 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 7m38s kubelet Node ubuntu-20-agent-6 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 7m38s kubelet Node ubuntu-20-agent-6 status is now: NodeHasSufficientPID
Normal RegisteredNode 7m34s node-controller Node ubuntu-20-agent-6 event: Registered Node ubuntu-20-agent-6 in Controller
==> dmesg <==
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 b3 4d e1 84 14 08 06
[ +0.000471] IPv4: martian source 10.244.0.9 from 10.244.0.3, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a c0 2b 98 6b fc 08 06
[ +0.010422] IPv4: martian source 10.244.0.9 from 10.244.0.8, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 c1 76 20 d1 ef 08 06
[ +3.594908] IPv4: martian source 10.244.0.1 from 10.244.0.12, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 59 7a 96 5e f0 08 06
[ +4.890376] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 09 43 d4 b2 56 08 06
[ +0.035206] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 24 81 23 26 65 08 06
[ +1.471693] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 e2 24 a8 60 17 08 06
[ +0.039121] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 4e 89 1b 62 16 4d 08 06
[ +1.992500] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 67 f3 0c ac 02 08 06
[ +0.186024] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 65 85 04 e8 54 08 06
[ +0.010241] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 26 41 4b fb 2a 08 06
[ +5.710883] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 12 ab 1f 55 3c 86 08 06
[ +0.000511] IPv4: martian source 10.244.0.23 from 10.244.0.3, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 1a c0 2b 98 6b fc 08 06
==> etcd [da10ffa19a77] <==
{"level":"warn","ts":"2025-10-18T13:58:32.478425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52676","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-18T13:58:32.489716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52700","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-18T13:58:32.496020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52716","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-18T13:58:32.502190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52732","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-18T13:58:44.224342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37954","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-18T13:58:44.234960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37968","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-18T13:58:49.801859Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"137.074365ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-ubuntu-20-agent-6\" limit:1 ","response":"range_response_count:1 size:4994"}
{"level":"info","ts":"2025-10-18T13:58:49.801981Z","caller":"traceutil/trace.go:172","msg":"trace[2080024868] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-ubuntu-20-agent-6; range_end:; response_count:1; response_revision:883; }","duration":"137.232648ms","start":"2025-10-18T13:58:49.664734Z","end":"2025-10-18T13:58:49.801967Z","steps":["trace[2080024868] 'range keys from in-memory index tree' (duration: 136.889408ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-18T13:58:56.173135Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.336842ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-10-18T13:58:56.173239Z","caller":"traceutil/trace.go:172","msg":"trace[875379153] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:906; }","duration":"129.463362ms","start":"2025-10-18T13:58:56.043760Z","end":"2025-10-18T13:58:56.173223Z","steps":["trace[875379153] 'range keys from in-memory index tree' (duration: 124.639881ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-18T13:58:56.173822Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.789399ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13698993455330819790 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:903 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
{"level":"info","ts":"2025-10-18T13:58:56.173937Z","caller":"traceutil/trace.go:172","msg":"trace[1932997164] transaction","detail":"{read_only:false; response_revision:907; number_of_response:1; }","duration":"239.667381ms","start":"2025-10-18T13:58:55.934255Z","end":"2025-10-18T13:58:56.173922Z","steps":["trace[1932997164] 'process raft request' (duration: 114.18973ms)","trace[1932997164] 'compare' (duration: 124.666134ms)"],"step_count":2}
{"level":"warn","ts":"2025-10-18T13:59:09.975354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53084","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-18T13:59:09.986208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53110","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-18T13:59:10.000278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53112","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-18T13:59:10.008695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53134","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-18T13:59:10.039883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53144","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-18T13:59:10.099722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53164","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-18T13:59:10.132107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53180","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-18T13:59:10.144202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53208","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-18T13:59:10.154836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53212","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-18T13:59:10.166711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53230","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-18T13:59:10.180935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53236","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-18T13:59:10.194497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53252","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-18T13:59:10.202563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53276","server-name":"","error":"EOF"}
==> gcp-auth [95a5eb24db3f] <==
2025/10/18 13:59:25 GCP Auth Webhook started!
==> kernel <==
14:06:13 up 1:48, 0 users, load average: 0.20, 0.87, 1.69
Linux ubuntu-20-agent-6 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kube-apiserver [022a004f35df] <==
W1018 13:59:09.975386 1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1018 13:59:09.986212 1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1018 13:59:10.000192 1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1018 13:59:10.008517 1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1018 13:59:10.039852 1 logging.go:55] [core] [Channel #286 SubChannel #287]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
W1018 13:59:10.099697 1 logging.go:55] [core] [Channel #290 SubChannel #291]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1018 13:59:10.132123 1 logging.go:55] [core] [Channel #294 SubChannel #295]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
W1018 13:59:10.144232 1 logging.go:55] [core] [Channel #298 SubChannel #299]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1018 13:59:10.154778 1 logging.go:55] [core] [Channel #302 SubChannel #303]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1018 13:59:10.166726 1 logging.go:55] [core] [Channel #306 SubChannel #307]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1018 13:59:10.181047 1 logging.go:55] [core] [Channel #310 SubChannel #311]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
W1018 13:59:10.194477 1 logging.go:55] [core] [Channel #314 SubChannel #315]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
W1018 13:59:10.202392 1 logging.go:55] [core] [Channel #318 SubChannel #319]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
E1018 13:59:15.733150 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.183.179:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.183.179:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.183.179:443: connect: connection refused" logger="UnhandledError"
W1018 13:59:15.733227 1 handler_proxy.go:99] no RequestInfo found in the context
E1018 13:59:15.733321 1 controller.go:146] "Unhandled Error" err=<
Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
E1018 13:59:15.733681 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.183.179:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.183.179:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.183.179:443: connect: connection refused" logger="UnhandledError"
E1018 13:59:15.738955 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.183.179:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.183.179:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.183.179:443: connect: connection refused" logger="UnhandledError"
E1018 13:59:15.759701 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.183.179:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.183.179:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.183.179:443: connect: connection refused" logger="UnhandledError"
E1018 13:59:15.800235 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.183.179:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.183.179:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.183.179:443: connect: connection refused" logger="UnhandledError"
E1018 13:59:15.880969 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.183.179:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.183.179:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.183.179:443: connect: connection refused" logger="UnhandledError"
I1018 13:59:16.071828 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
==> kube-controller-manager [1398f3ba960f] <==
I1018 13:58:39.938878 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
I1018 13:58:39.938904 1 shared_informer.go:356] "Caches are synced" controller="attach detach"
I1018 13:58:39.938890 1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
I1018 13:58:39.940135 1 shared_informer.go:356] "Caches are synced" controller="job"
I1018 13:58:39.941121 1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
I1018 13:58:39.942418 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1018 13:58:39.944558 1 shared_informer.go:356] "Caches are synced" controller="taint"
I1018 13:58:39.944634 1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
I1018 13:58:39.944795 1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ubuntu-20-agent-6"
I1018 13:58:39.944853 1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
I1018 13:58:39.947119 1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
I1018 13:58:39.959461 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E1018 13:59:09.948771 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I1018 13:59:09.948973 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch.volcano.sh"
I1018 13:59:09.949021 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podgroups.scheduling.volcano.sh"
I1018 13:59:09.949052 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
I1018 13:59:09.949082 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch.volcano.sh"
I1018 13:59:09.949117 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="commands.bus.volcano.sh"
I1018 13:59:09.949157 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobflows.flow.volcano.sh"
I1018 13:59:09.949187 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobtemplates.flow.volcano.sh"
I1018 13:59:09.949276 1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
I1018 13:59:09.968896 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
I1018 13:59:09.974476 1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
I1018 13:59:11.249970 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1018 13:59:11.275117 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
==> kube-proxy [7cada6cadb72] <==
I1018 13:58:42.446135 1 server_linux.go:53] "Using iptables proxy"
I1018 13:58:42.562963 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1018 13:58:42.663170 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1018 13:58:42.663215 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["10.154.0.2"]
E1018 13:58:42.665964 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1018 13:58:42.811516 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1018 13:58:42.815547 1 server_linux.go:132] "Using iptables Proxier"
I1018 13:58:42.847608 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1018 13:58:42.852619 1 server.go:527] "Version info" version="v1.34.1"
I1018 13:58:42.852714 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1018 13:58:42.857872 1 config.go:200] "Starting service config controller"
I1018 13:58:42.857985 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1018 13:58:42.858383 1 config.go:106] "Starting endpoint slice config controller"
I1018 13:58:42.858616 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1018 13:58:42.863430 1 config.go:403] "Starting serviceCIDR config controller"
I1018 13:58:42.866258 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1018 13:58:42.863477 1 config.go:309] "Starting node config controller"
I1018 13:58:42.866358 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1018 13:58:42.866369 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1018 13:58:42.961515 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1018 13:58:42.963546 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1018 13:58:42.966587 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
==> kube-scheduler [a03a41cbf291] <==
E1018 13:58:32.957263 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1018 13:58:32.957364 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1018 13:58:32.957417 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1018 13:58:32.957427 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1018 13:58:32.957449 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1018 13:58:32.957469 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1018 13:58:32.957478 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1018 13:58:32.957526 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1018 13:58:32.957942 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1018 13:58:32.958024 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1018 13:58:32.959009 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1018 13:58:33.784718 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1018 13:58:33.787971 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1018 13:58:33.799550 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1018 13:58:33.829964 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1018 13:58:33.839511 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1018 13:58:33.844832 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1018 13:58:33.915550 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1018 13:58:33.959640 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1018 13:58:34.002782 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1018 13:58:34.010007 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1018 13:58:34.033408 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1018 13:58:34.081750 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1018 13:58:34.372069 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
I1018 13:58:36.154076 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Oct 18 14:05:05 ubuntu-20-agent-6 kubelet[389524]: E1018 14:05:05.361051 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-76c996c8bf-xpx6k" podUID="b36e9518-42d7-4650-86e6-facb44dadd1c"
Oct 18 14:05:06 ubuntu-20-agent-6 kubelet[389524]: E1018 14:05:06.360402 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-6fd4f85cb8-qvrf9" podUID="7b2b31d9-f6ef-4291-9315-c12bd396f755"
Oct 18 14:05:09 ubuntu-20-agent-6 kubelet[389524]: E1018 14:05:09.360966 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-29wdz" podUID="2848dc91-55c1-48db-8cfe-6257aa2f79c6"
Oct 18 14:05:15 ubuntu-20-agent-6 kubelet[389524]: E1018 14:05:15.360665 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-4h5dd" podUID="2eaa7830-2c9b-4d0a-b54e-3d6b21274de9"
Oct 18 14:05:15 ubuntu-20-agent-6 kubelet[389524]: E1018 14:05:15.363100 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-m5q8q" podUID="024ebddf-f4a3-4cbd-9650-3c84de941534"
Oct 18 14:05:17 ubuntu-20-agent-6 kubelet[389524]: E1018 14:05:17.360348 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-76c996c8bf-xpx6k" podUID="b36e9518-42d7-4650-86e6-facb44dadd1c"
Oct 18 14:05:18 ubuntu-20-agent-6 kubelet[389524]: E1018 14:05:18.360347 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-6fd4f85cb8-qvrf9" podUID="7b2b31d9-f6ef-4291-9315-c12bd396f755"
Oct 18 14:05:18 ubuntu-20-agent-6 kubelet[389524]: E1018 14:05:18.362424 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[admission-certs], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="volcano-system/volcano-admission-6c447bd768-wr27z" podUID="b899284a-e849-40f7-90d3-03e1a83df770"
Oct 18 14:05:23 ubuntu-20-agent-6 kubelet[389524]: E1018 14:05:23.359888 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-29wdz" podUID="2848dc91-55c1-48db-8cfe-6257aa2f79c6"
Oct 18 14:05:27 ubuntu-20-agent-6 kubelet[389524]: E1018 14:05:27.362957 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-m5q8q" podUID="024ebddf-f4a3-4cbd-9650-3c84de941534"
Oct 18 14:05:31 ubuntu-20-agent-6 kubelet[389524]: E1018 14:05:31.360176 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-6fd4f85cb8-qvrf9" podUID="7b2b31d9-f6ef-4291-9315-c12bd396f755"
Oct 18 14:05:31 ubuntu-20-agent-6 kubelet[389524]: E1018 14:05:31.360263 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-76c996c8bf-xpx6k" podUID="b36e9518-42d7-4650-86e6-facb44dadd1c"
Oct 18 14:05:36 ubuntu-20-agent-6 kubelet[389524]: E1018 14:05:36.359909 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-29wdz" podUID="2848dc91-55c1-48db-8cfe-6257aa2f79c6"
Oct 18 14:05:41 ubuntu-20-agent-6 kubelet[389524]: E1018 14:05:41.375585 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-m5q8q" podUID="024ebddf-f4a3-4cbd-9650-3c84de941534"
Oct 18 14:05:45 ubuntu-20-agent-6 kubelet[389524]: E1018 14:05:45.360740 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-76c996c8bf-xpx6k" podUID="b36e9518-42d7-4650-86e6-facb44dadd1c"
Oct 18 14:05:46 ubuntu-20-agent-6 kubelet[389524]: E1018 14:05:46.360934 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-6fd4f85cb8-qvrf9" podUID="7b2b31d9-f6ef-4291-9315-c12bd396f755"
Oct 18 14:05:47 ubuntu-20-agent-6 kubelet[389524]: E1018 14:05:47.369855 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-29wdz" podUID="2848dc91-55c1-48db-8cfe-6257aa2f79c6"
Oct 18 14:05:55 ubuntu-20-agent-6 kubelet[389524]: E1018 14:05:55.363522 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-m5q8q" podUID="024ebddf-f4a3-4cbd-9650-3c84de941534"
Oct 18 14:05:56 ubuntu-20-agent-6 kubelet[389524]: E1018 14:05:56.360046 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-76c996c8bf-xpx6k" podUID="b36e9518-42d7-4650-86e6-facb44dadd1c"
Oct 18 14:05:57 ubuntu-20-agent-6 kubelet[389524]: E1018 14:05:57.360337 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-6fd4f85cb8-qvrf9" podUID="7b2b31d9-f6ef-4291-9315-c12bd396f755"
Oct 18 14:05:58 ubuntu-20-agent-6 kubelet[389524]: E1018 14:05:58.360477 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-29wdz" podUID="2848dc91-55c1-48db-8cfe-6257aa2f79c6"
Oct 18 14:06:08 ubuntu-20-agent-6 kubelet[389524]: E1018 14:06:08.360322 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-76c996c8bf-xpx6k" podUID="b36e9518-42d7-4650-86e6-facb44dadd1c"
Oct 18 14:06:08 ubuntu-20-agent-6 kubelet[389524]: E1018 14:06:08.362569 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-m5q8q" podUID="024ebddf-f4a3-4cbd-9650-3c84de941534"
Oct 18 14:06:09 ubuntu-20-agent-6 kubelet[389524]: E1018 14:06:09.360552 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-29wdz" podUID="2848dc91-55c1-48db-8cfe-6257aa2f79c6"
Oct 18 14:06:10 ubuntu-20-agent-6 kubelet[389524]: E1018 14:06:10.360097 389524 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-6fd4f85cb8-qvrf9" podUID="7b2b31d9-f6ef-4291-9315-c12bd396f755"
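Every ImagePullBackOff above reports the same root cause: Docker Hub's unauthenticated pull rate limit (toomanyrequests). One possible mitigation sketch, assuming Docker Hub credentials are available on the test host; these commands were not run as part of this test:
docker login   # authenticated clients get a higher pull rate limit
docker pull docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34
minikube image load docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34   # pre-seed the cluster so kubelet finds the image locally
# repeat for the other pinned images (vc-controller-manager, vc-webhook-manager, yakd) as needed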
==> storage-provisioner [ff4647d21b1f] <==
W1018 14:05:47.870968 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 14:05:49.874442 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 14:05:49.878600 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 14:05:51.882445 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 14:05:51.886663 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 14:05:53.890458 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 14:05:53.894689 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 14:05:55.899000 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 14:05:55.904718 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 14:05:57.908504 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 14:05:57.913829 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 14:05:59.917279 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 14:05:59.922655 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 14:06:01.926596 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 14:06:01.930849 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 14:06:03.933971 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 14:06:03.938720 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 14:06:05.943013 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 14:06:05.947945 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 14:06:07.951763 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 14:06:07.957971 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 14:06:09.961492 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 14:06:09.965641 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 14:06:11.968907 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 14:06:11.973361 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:269: (dbg) Run: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: registry-creds-764b6fb674-4h5dd volcano-admission-6c447bd768-wr27z volcano-admission-init-29wdz volcano-controllers-6fd4f85cb8-qvrf9 volcano-scheduler-76c996c8bf-xpx6k yakd-dashboard-5ff678cb9-m5q8q
helpers_test.go:282: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context minikube describe pod registry-creds-764b6fb674-4h5dd volcano-admission-6c447bd768-wr27z volcano-admission-init-29wdz volcano-controllers-6fd4f85cb8-qvrf9 volcano-scheduler-76c996c8bf-xpx6k yakd-dashboard-5ff678cb9-m5q8q
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context minikube describe pod registry-creds-764b6fb674-4h5dd volcano-admission-6c447bd768-wr27z volcano-admission-init-29wdz volcano-controllers-6fd4f85cb8-qvrf9 volcano-scheduler-76c996c8bf-xpx6k yakd-dashboard-5ff678cb9-m5q8q: exit status 1 (71.565755ms)
** stderr **
Error from server (NotFound): pods "registry-creds-764b6fb674-4h5dd" not found
Error from server (NotFound): pods "volcano-admission-6c447bd768-wr27z" not found
Error from server (NotFound): pods "volcano-admission-init-29wdz" not found
Error from server (NotFound): pods "volcano-controllers-6fd4f85cb8-qvrf9" not found
Error from server (NotFound): pods "volcano-scheduler-76c996c8bf-xpx6k" not found
Error from server (NotFound): pods "yakd-dashboard-5ff678cb9-m5q8q" not found
** /stderr **
helpers_test.go:287: kubectl --context minikube describe pod registry-creds-764b6fb674-4h5dd volcano-admission-6c447bd768-wr27z volcano-admission-init-29wdz volcano-controllers-6fd4f85cb8-qvrf9 volcano-scheduler-76c996c8bf-xpx6k yakd-dashboard-5ff678cb9-m5q8q: exit status 1
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (11.538548458s)
--- FAIL: TestAddons/serial/Volcano (373.70s)
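For reference, a sketch of re-running just this subtest from a minikube checkout; any extra integration-test flags the CI job passes (driver, start args) are omitted here and may be required:
go test ./test/integration -run 'TestAddons/serial/Volcano' -v -timeout 30m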