=== RUN TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 8.793155ms
addons_test.go:815: volcano-admission stabilized in 8.856229ms
addons_test.go:807: volcano-scheduler stabilized in 8.898522ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-kkrdq" [eca17150-2673-4431-a0cc-079a7c574525] Pending / Ready:ContainersNotReady (containers with unready status: [volcano-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [volcano-scheduler])
addons_test.go:829: ***** TestAddons/serial/Volcano: pod "app=volcano-scheduler" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:829: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
addons_test.go:829: TestAddons/serial/Volcano: showing logs for failed pods as of 2025-04-07 12:53:54.220048742 +0000 UTC m=+542.002069108
addons_test.go:829: (dbg) Run: kubectl --context minikube describe po volcano-scheduler-75fdd99bcf-kkrdq -n volcano-system
addons_test.go:829: (dbg) kubectl --context minikube describe po volcano-scheduler-75fdd99bcf-kkrdq -n volcano-system:
Name: volcano-scheduler-75fdd99bcf-kkrdq
Namespace: volcano-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Service Account: volcano-scheduler
Node: ubuntu-20-agent/10.132.0.4
Start Time: Mon, 07 Apr 2025 12:46:23 +0000
Labels: app=volcano-scheduler
pod-template-hash=75fdd99bcf
Annotations: <none>
Status: Pending
IP: 10.244.0.19
IPs:
IP: 10.244.0.19
Controlled By: ReplicaSet/volcano-scheduler-75fdd99bcf
Containers:
volcano-scheduler:
Container ID:
Image: docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86
Image ID:
Port: <none>
Host Port: <none>
Args:
--logtostderr
--scheduler-conf=/volcano.scheduler/volcano-scheduler.conf
--enable-healthz=true
--enable-metrics=true
--leader-elect=false
--kube-api-qps=2000
--kube-api-burst=2000
--schedule-period=1s
--node-worker-threads=20
-v=3
2>&1
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
DEBUG_SOCKET_DIR: /tmp/klog-socks
Mounts:
/tmp/klog-socks from klog-sock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mbqtk (ro)
/volcano.scheduler from scheduler-config (rw)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
scheduler-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: volcano-scheduler-configmap
Optional: false
klog-sock:
Type: HostPath (bare host directory volume)
Path: /tmp/klog-socks
HostPathType:
kube-api-access-mbqtk:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m31s default-scheduler Successfully assigned volcano-system/volcano-scheduler-75fdd99bcf-kkrdq to ubuntu-20-agent
Normal Pulling 4m (x5 over 7m30s) kubelet Pulling image "docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86"
Warning Failed 3m59s (x5 over 6m55s) kubelet Failed to pull image "docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning Failed 3m59s (x5 over 6m55s) kubelet Error: ErrImagePull
Warning Failed 115s (x20 over 6m55s) kubelet Error: ImagePullBackOff
Normal BackOff 100s (x21 over 6m55s) kubelet Back-off pulling image "docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86"
addons_test.go:829: (dbg) Run: kubectl --context minikube logs volcano-scheduler-75fdd99bcf-kkrdq -n volcano-system
addons_test.go:829: (dbg) Non-zero exit: kubectl --context minikube logs volcano-scheduler-75fdd99bcf-kkrdq -n volcano-system: exit status 1 (78.987491ms)
** stderr **
Error from server (BadRequest): container "volcano-scheduler" in pod "volcano-scheduler-75fdd99bcf-kkrdq" is waiting to start: trying and failing to pull image
** /stderr **
addons_test.go:829: kubectl --context minikube logs volcano-scheduler-75fdd99bcf-kkrdq -n volcano-system: exit status 1
addons_test.go:830: failed waiting for app=volcano-scheduler pod: app=volcano-scheduler within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/serial/Volcano logs:
-- stdout --
==> Audit <==
|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | minikube | jenkins | v1.35.0 | 07 Apr 25 12:44 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
| delete | -p minikube | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
| start | -o=json --download-only | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
| delete | -p minikube | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
| delete | -p minikube | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
| delete | -p minikube | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
| start | --download-only -p | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | |
| | minikube --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:38191 | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
| start | -p minikube --alsologtostderr | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
| | -v=1 --memory=2048 | | | | | |
| | --wait=true --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:46 UTC |
| addons | enable dashboard -p minikube | minikube | jenkins | v1.35.0 | 07 Apr 25 12:46 UTC | |
| addons | disable dashboard -p minikube | minikube | jenkins | v1.35.0 | 07 Apr 25 12:46 UTC | |
| start | -p minikube --wait=true | minikube | jenkins | v1.35.0 | 07 Apr 25 12:46 UTC | 07 Apr 25 12:47 UTC |
| | --memory=4000 | | | | | |
| | --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --addons=amd-gpu-device-plugin | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/04/07 12:46:01
Running on machine: ubuntu-20-agent
Binary: Built with gc go1.24.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0407 12:46:01.231062 1429316 out.go:345] Setting OutFile to fd 1 ...
I0407 12:46:01.231195 1429316 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:46:01.231206 1429316 out.go:358] Setting ErrFile to fd 2...
I0407 12:46:01.231210 1429316 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:46:01.231464 1429316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-1418173/.minikube/bin
I0407 12:46:01.232140 1429316 out.go:352] Setting JSON to false
I0407 12:46:01.233179 1429316 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":16105,"bootTime":1744013856,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0407 12:46:01.233311 1429316 start.go:139] virtualization: kvm guest
I0407 12:46:01.235474 1429316 out.go:177] * minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
W0407 12:46:01.236694 1429316 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20598-1418173/.minikube/cache/preloaded-tarball: no such file or directory
I0407 12:46:01.236729 1429316 out.go:177] - MINIKUBE_LOCATION=20598
I0407 12:46:01.236731 1429316 notify.go:220] Checking for updates...
I0407 12:46:01.239515 1429316 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0407 12:46:01.240993 1429316 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20598-1418173/kubeconfig
I0407 12:46:01.242159 1429316 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1418173/.minikube
I0407 12:46:01.243419 1429316 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0407 12:46:01.244910 1429316 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0407 12:46:01.246416 1429316 driver.go:394] Setting default libvirt URI to qemu:///system
I0407 12:46:01.257114 1429316 out.go:177] * Using the none driver based on user configuration
I0407 12:46:01.258434 1429316 start.go:297] selected driver: none
I0407 12:46:01.258453 1429316 start.go:901] validating driver "none" against <nil>
I0407 12:46:01.258480 1429316 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0407 12:46:01.258516 1429316 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
W0407 12:46:01.258825 1429316 out.go:270] ! The 'none' driver does not respect the --memory flag
I0407 12:46:01.259483 1429316 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0407 12:46:01.259773 1429316 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0407 12:46:01.259810 1429316 cni.go:84] Creating CNI manager for ""
I0407 12:46:01.259875 1429316 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0407 12:46:01.259906 1429316 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0407 12:46:01.259962 1429316 start.go:340] cluster config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0407 12:46:01.261473 1429316 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
I0407 12:46:01.262965 1429316 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/config.json ...
I0407 12:46:01.263009 1429316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/config.json: {Name:mk7435778f484db7c9644d73cb119c70d439299f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 12:46:01.263157 1429316 start.go:360] acquireMachinesLock for minikube: {Name:mk53793948be750dfc684af85278e6856b44afc9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0407 12:46:01.263242 1429316 start.go:364] duration metric: took 28.329µs to acquireMachinesLock for "minikube"
I0407 12:46:01.263265 1429316 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0407 12:46:01.263340 1429316 start.go:125] createHost starting for "" (driver="none")
I0407 12:46:01.265117 1429316 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
I0407 12:46:01.267404 1429316 exec_runner.go:51] Run: systemctl --version
I0407 12:46:01.270063 1429316 start.go:159] libmachine.API.Create for "minikube" (driver="none")
I0407 12:46:01.270101 1429316 client.go:168] LocalClient.Create starting
I0407 12:46:01.270187 1429316 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20598-1418173/.minikube/certs/ca.pem
I0407 12:46:01.270218 1429316 main.go:141] libmachine: Decoding PEM data...
I0407 12:46:01.270234 1429316 main.go:141] libmachine: Parsing certificate...
I0407 12:46:01.270296 1429316 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20598-1418173/.minikube/certs/cert.pem
I0407 12:46:01.270319 1429316 main.go:141] libmachine: Decoding PEM data...
I0407 12:46:01.270329 1429316 main.go:141] libmachine: Parsing certificate...
I0407 12:46:01.270642 1429316 client.go:171] duration metric: took 532.06µs to LocalClient.Create
I0407 12:46:01.270666 1429316 start.go:167] duration metric: took 613.883µs to libmachine.API.Create "minikube"
I0407 12:46:01.270673 1429316 start.go:293] postStartSetup for "minikube" (driver="none")
I0407 12:46:01.270708 1429316 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0407 12:46:01.270753 1429316 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0407 12:46:01.280436 1429316 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0407 12:46:01.280458 1429316 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0407 12:46:01.280466 1429316 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0407 12:46:01.282450 1429316 out.go:177] * OS release is Ubuntu 20.04.6 LTS
I0407 12:46:01.283786 1429316 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-1418173/.minikube/addons for local assets ...
I0407 12:46:01.283847 1429316 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-1418173/.minikube/files for local assets ...
I0407 12:46:01.283872 1429316 start.go:296] duration metric: took 13.189796ms for postStartSetup
I0407 12:46:01.284520 1429316 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/config.json ...
I0407 12:46:01.284674 1429316 start.go:128] duration metric: took 21.323169ms to createHost
I0407 12:46:01.284690 1429316 start.go:83] releasing machines lock for "minikube", held for 21.433196ms
I0407 12:46:01.285057 1429316 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0407 12:46:01.285154 1429316 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
W0407 12:46:01.287094 1429316 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0407 12:46:01.287141 1429316 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0407 12:46:01.297196 1429316 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0407 12:46:01.297229 1429316 start.go:495] detecting cgroup driver to use...
I0407 12:46:01.297261 1429316 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0407 12:46:01.297368 1429316 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0407 12:46:01.319217 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0407 12:46:01.329584 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0407 12:46:01.338895 1429316 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0407 12:46:01.338957 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0407 12:46:01.349057 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0407 12:46:01.359932 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0407 12:46:01.375600 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0407 12:46:01.386405 1429316 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0407 12:46:01.396041 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0407 12:46:01.406577 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0407 12:46:01.429519 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0407 12:46:01.439514 1429316 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0407 12:46:01.448440 1429316 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0407 12:46:01.456361 1429316 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0407 12:46:01.690650 1429316 exec_runner.go:51] Run: sudo systemctl restart containerd
I0407 12:46:01.754996 1429316 start.go:495] detecting cgroup driver to use...
I0407 12:46:01.755055 1429316 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0407 12:46:01.755169 1429316 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0407 12:46:01.781838 1429316 exec_runner.go:51] Run: which cri-dockerd
I0407 12:46:01.782866 1429316 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0407 12:46:01.791549 1429316 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
I0407 12:46:01.791585 1429316 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0407 12:46:01.791637 1429316 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0407 12:46:01.800329 1429316 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0407 12:46:01.800548 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1817254808 /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0407 12:46:01.809824 1429316 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I0407 12:46:02.026255 1429316 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I0407 12:46:02.249916 1429316 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0407 12:46:02.250098 1429316 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
I0407 12:46:02.250116 1429316 exec_runner.go:203] rm: /etc/docker/daemon.json
I0407 12:46:02.250166 1429316 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
I0407 12:46:02.259552 1429316 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
I0407 12:46:02.259746 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1779169503 /etc/docker/daemon.json
I0407 12:46:02.268933 1429316 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0407 12:46:02.501531 1429316 exec_runner.go:51] Run: sudo systemctl restart docker
I0407 12:46:02.848272 1429316 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0407 12:46:02.861572 1429316 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
I0407 12:46:02.879408 1429316 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0407 12:46:02.890750 1429316 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
I0407 12:46:03.122082 1429316 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I0407 12:46:03.361507 1429316 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0407 12:46:03.590334 1429316 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
I0407 12:46:03.605891 1429316 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0407 12:46:03.618044 1429316 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0407 12:46:03.839254 1429316 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
I0407 12:46:03.911084 1429316 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0407 12:46:03.911171 1429316 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
I0407 12:46:03.912678 1429316 start.go:563] Will wait 60s for crictl version
I0407 12:46:03.912723 1429316 exec_runner.go:51] Run: which crictl
I0407 12:46:03.913606 1429316 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
I0407 12:46:03.947511 1429316 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 28.0.4
RuntimeApiVersion: v1
I0407 12:46:03.947603 1429316 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0407 12:46:03.971036 1429316 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0407 12:46:03.995613 1429316 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 28.0.4 ...
I0407 12:46:03.995718 1429316 exec_runner.go:51] Run: grep 127.0.0.1 host.minikube.internal$ /etc/hosts
I0407 12:46:03.998437 1429316 out.go:177] - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
I0407 12:46:03.999593 1429316 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.132.0.4 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0407 12:46:03.999705 1429316 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
I0407 12:46:03.999717 1429316 kubeadm.go:934] updating node { 10.132.0.4 8443 v1.32.2 docker true true} ...
I0407 12:46:03.999847 1429316 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.132.0.4 --resolv-conf=/run/systemd/resolve/resolv.conf
[Install]
config:
{KubernetesVersion:v1.32.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
I0407 12:46:03.999895 1429316 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
I0407 12:46:04.048035 1429316 cni.go:84] Creating CNI manager for ""
I0407 12:46:04.048071 1429316 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0407 12:46:04.048086 1429316 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0407 12:46:04.048111 1429316 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.132.0.4 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.132.0.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.132.0.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0407 12:46:04.048253 1429316 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 10.132.0.4
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "ubuntu-20-agent"
kubeletExtraArgs:
- name: "node-ip"
value: "10.132.0.4"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "10.132.0.4"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.32.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0407 12:46:04.048321 1429316 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
I0407 12:46:04.057083 1429316 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: exit status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
Initiating transfer...
I0407 12:46:04.057170 1429316 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
I0407 12:46:04.065629 1429316 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256
I0407 12:46:04.065684 1429316 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0407 12:46:04.065685 1429316 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
I0407 12:46:04.065755 1429316 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256
I0407 12:46:04.065764 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
I0407 12:46:04.065802 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/cache/linux/amd64/v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
I0407 12:46:04.077514 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
I0407 12:46:04.124513 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3196352494 /var/lib/minikube/binaries/v1.32.2/kubectl
I0407 12:46:04.130593 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4160672640 /var/lib/minikube/binaries/v1.32.2/kubeadm
I0407 12:46:04.149975 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2575322256 /var/lib/minikube/binaries/v1.32.2/kubelet
I0407 12:46:04.230292 1429316 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0407 12:46:04.239456 1429316 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
I0407 12:46:04.239485 1429316 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0407 12:46:04.239525 1429316 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0407 12:46:04.247941 1429316 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
I0407 12:46:04.248129 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2812554544 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0407 12:46:04.256651 1429316 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
I0407 12:46:04.256679 1429316 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
I0407 12:46:04.256714 1429316 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
I0407 12:46:04.264872 1429316 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0407 12:46:04.265044 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2861707748 /lib/systemd/system/kubelet.service
I0407 12:46:04.273635 1429316 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
I0407 12:46:04.273784 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3026017713 /var/tmp/minikube/kubeadm.yaml.new
I0407 12:46:04.282029 1429316 exec_runner.go:51] Run: grep 10.132.0.4 control-plane.minikube.internal$ /etc/hosts
I0407 12:46:04.283624 1429316 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0407 12:46:04.517665 1429316 exec_runner.go:51] Run: sudo systemctl start kubelet
I0407 12:46:04.532121 1429316 certs.go:68] Setting up /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube for IP: 10.132.0.4
I0407 12:46:04.532154 1429316 certs.go:194] generating shared ca certs ...
I0407 12:46:04.532182 1429316 certs.go:226] acquiring lock for ca certs: {Name:mke037ea5f6110cd4db349ee47a4532de031e41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 12:46:04.532401 1429316 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20598-1418173/.minikube/ca.key
I0407 12:46:04.532475 1429316 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20598-1418173/.minikube/proxy-client-ca.key
I0407 12:46:04.532490 1429316 certs.go:256] generating profile certs ...
I0407 12:46:04.532571 1429316 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/client.key
I0407 12:46:04.532593 1429316 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/client.crt with IP's: []
I0407 12:46:04.746361 1429316 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/client.crt ...
I0407 12:46:04.746398 1429316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/client.crt: {Name:mkd685522b407e574e9a17242256ea962f13d180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 12:46:04.746567 1429316 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/client.key ...
I0407 12:46:04.746584 1429316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/client.key: {Name:mk93b3de66b65705ca976ab8fb0e07c53d19cd38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 12:46:04.746673 1429316 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.key.b039158f
I0407 12:46:04.746690 1429316 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.crt.b039158f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.132.0.4]
I0407 12:46:04.946265 1429316 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.crt.b039158f ...
I0407 12:46:04.946301 1429316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.crt.b039158f: {Name:mkc92f9f9b71902112ff236a3fce9245b28fbc4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 12:46:04.946465 1429316 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.key.b039158f ...
I0407 12:46:04.946486 1429316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.key.b039158f: {Name:mk8e0d10049da8458969638f3be970030e3a7c14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 12:46:04.946565 1429316 certs.go:381] copying /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.crt.b039158f -> /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.crt
I0407 12:46:04.946677 1429316 certs.go:385] copying /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.key.b039158f -> /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.key
I0407 12:46:04.946745 1429316 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/proxy-client.key
I0407 12:46:04.946768 1429316 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0407 12:46:05.422333 1429316 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/proxy-client.crt ...
I0407 12:46:05.422367 1429316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/proxy-client.crt: {Name:mk657c14bd9f3b8cdc778a995b4cc49084dc96e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 12:46:05.422505 1429316 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/proxy-client.key ...
I0407 12:46:05.422521 1429316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/proxy-client.key: {Name:mk04945c273de7864e5113cfa901b08a2b911d34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 12:46:05.422716 1429316 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-1418173/.minikube/certs/ca-key.pem (1675 bytes)
I0407 12:46:05.422763 1429316 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-1418173/.minikube/certs/ca.pem (1082 bytes)
I0407 12:46:05.422791 1429316 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-1418173/.minikube/certs/cert.pem (1123 bytes)
I0407 12:46:05.422814 1429316 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-1418173/.minikube/certs/key.pem (1675 bytes)
I0407 12:46:05.423465 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0407 12:46:05.423590 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3061948531 /var/lib/minikube/certs/ca.crt
I0407 12:46:05.432860 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0407 12:46:05.433025 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3068014908 /var/lib/minikube/certs/ca.key
I0407 12:46:05.443022 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0407 12:46:05.443199 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3928182364 /var/lib/minikube/certs/proxy-client-ca.crt
I0407 12:46:05.453837 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0407 12:46:05.453966 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2350601260 /var/lib/minikube/certs/proxy-client-ca.key
I0407 12:46:05.463595 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
I0407 12:46:05.463752 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2667253322 /var/lib/minikube/certs/apiserver.crt
I0407 12:46:05.473238 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0407 12:46:05.473362 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2852199116 /var/lib/minikube/certs/apiserver.key
I0407 12:46:05.482563 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0407 12:46:05.482740 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3185536542 /var/lib/minikube/certs/proxy-client.crt
I0407 12:46:05.491833 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0407 12:46:05.491981 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4000808512 /var/lib/minikube/certs/proxy-client.key
I0407 12:46:05.500441 1429316 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
I0407 12:46:05.500465 1429316 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
I0407 12:46:05.500497 1429316 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
I0407 12:46:05.508314 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0407 12:46:05.508471 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube385856075 /usr/share/ca-certificates/minikubeCA.pem
I0407 12:46:05.517215 1429316 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0407 12:46:05.517362 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2963500488 /var/lib/minikube/kubeconfig
I0407 12:46:05.526041 1429316 exec_runner.go:51] Run: openssl version
I0407 12:46:05.528924 1429316 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0407 12:46:05.537534 1429316 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0407 12:46:05.538811 1429316 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Apr 7 12:46 /usr/share/ca-certificates/minikubeCA.pem
I0407 12:46:05.538859 1429316 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0407 12:46:05.541631 1429316 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0407 12:46:05.552781 1429316 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0407 12:46:05.553844 1429316 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0407 12:46:05.553891 1429316 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.132.0.4 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0407 12:46:05.553998 1429316 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0407 12:46:05.570270 1429316 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0407 12:46:05.579733 1429316 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0407 12:46:05.595767 1429316 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0407 12:46:05.617394 1429316 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0407 12:46:05.627797 1429316 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0407 12:46:05.627825 1429316 kubeadm.go:157] found existing configuration files:
I0407 12:46:05.627872 1429316 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0407 12:46:05.636647 1429316 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0407 12:46:05.636704 1429316 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
I0407 12:46:05.644490 1429316 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0407 12:46:05.653066 1429316 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0407 12:46:05.653120 1429316 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0407 12:46:05.660877 1429316 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0407 12:46:05.670067 1429316 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0407 12:46:05.670133 1429316 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0407 12:46:05.678615 1429316 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0407 12:46:05.689345 1429316 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0407 12:46:05.689418 1429316 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0407 12:46:05.697526 1429316 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0407 12:46:05.733301 1429316 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
I0407 12:46:05.733366 1429316 kubeadm.go:310] [preflight] Running pre-flight checks
I0407 12:46:05.761513 1429316 kubeadm.go:310] [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
I0407 12:46:05.827926 1429316 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0407 12:46:05.827987 1429316 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0407 12:46:05.827995 1429316 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0407 12:46:05.828001 1429316 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0407 12:46:05.838908 1429316 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0407 12:46:05.842792 1429316 out.go:235] - Generating certificates and keys ...
I0407 12:46:05.842849 1429316 kubeadm.go:310] [certs] Using existing ca certificate authority
I0407 12:46:05.842866 1429316 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0407 12:46:05.929822 1429316 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0407 12:46:06.034156 1429316 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0407 12:46:06.137512 1429316 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0407 12:46:06.399738 1429316 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0407 12:46:06.658454 1429316 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0407 12:46:06.658837 1429316 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent] and IPs [10.132.0.4 127.0.0.1 ::1]
I0407 12:46:06.793515 1429316 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0407 12:46:06.793616 1429316 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent] and IPs [10.132.0.4 127.0.0.1 ::1]
I0407 12:46:07.111754 1429316 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0407 12:46:07.239104 1429316 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0407 12:46:07.374867 1429316 kubeadm.go:310] [certs] Generating "sa" key and public key
I0407 12:46:07.375054 1429316 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0407 12:46:07.516836 1429316 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0407 12:46:07.676713 1429316 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0407 12:46:08.039272 1429316 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0407 12:46:08.150766 1429316 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0407 12:46:08.340603 1429316 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0407 12:46:08.341788 1429316 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0407 12:46:08.344254 1429316 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0407 12:46:08.346695 1429316 out.go:235] - Booting up control plane ...
I0407 12:46:08.346729 1429316 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0407 12:46:08.346756 1429316 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0407 12:46:08.347211 1429316 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0407 12:46:08.372882 1429316 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0407 12:46:08.377541 1429316 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0407 12:46:08.377576 1429316 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0407 12:46:08.617762 1429316 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0407 12:46:08.617787 1429316 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0407 12:46:09.119698 1429316 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.893396ms
I0407 12:46:09.119727 1429316 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0407 12:46:14.121618 1429316 kubeadm.go:310] [api-check] The API server is healthy after 5.001918177s
I0407 12:46:14.134209 1429316 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0407 12:46:14.145166 1429316 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0407 12:46:14.166074 1429316 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0407 12:46:14.166105 1429316 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0407 12:46:14.173597 1429316 kubeadm.go:310] [bootstrap-token] Using token: p4kop0.df2qjc17ds7iaiam
I0407 12:46:14.175343 1429316 out.go:235] - Configuring RBAC rules ...
I0407 12:46:14.175389 1429316 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0407 12:46:14.178620 1429316 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0407 12:46:14.184157 1429316 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0407 12:46:14.186768 1429316 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0407 12:46:14.189495 1429316 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0407 12:46:14.193735 1429316 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0407 12:46:14.528888 1429316 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0407 12:46:14.951790 1429316 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0407 12:46:15.528465 1429316 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0407 12:46:15.529302 1429316 kubeadm.go:310]
I0407 12:46:15.529328 1429316 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0407 12:46:15.529333 1429316 kubeadm.go:310]
I0407 12:46:15.529338 1429316 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0407 12:46:15.529342 1429316 kubeadm.go:310]
I0407 12:46:15.529346 1429316 kubeadm.go:310] mkdir -p $HOME/.kube
I0407 12:46:15.529350 1429316 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0407 12:46:15.529376 1429316 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0407 12:46:15.529385 1429316 kubeadm.go:310]
I0407 12:46:15.529390 1429316 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0407 12:46:15.529394 1429316 kubeadm.go:310]
I0407 12:46:15.529398 1429316 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0407 12:46:15.529402 1429316 kubeadm.go:310]
I0407 12:46:15.529406 1429316 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0407 12:46:15.529410 1429316 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0407 12:46:15.529415 1429316 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0407 12:46:15.529422 1429316 kubeadm.go:310]
I0407 12:46:15.529428 1429316 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0407 12:46:15.529432 1429316 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0407 12:46:15.529434 1429316 kubeadm.go:310]
I0407 12:46:15.529439 1429316 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token p4kop0.df2qjc17ds7iaiam \
I0407 12:46:15.529443 1429316 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:a0218baebfbd26086bf2c1fda945fcf4b4d1b776503555f789838ba1e80aed9c \
I0407 12:46:15.529446 1429316 kubeadm.go:310] --control-plane
I0407 12:46:15.529448 1429316 kubeadm.go:310]
I0407 12:46:15.529451 1429316 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0407 12:46:15.529454 1429316 kubeadm.go:310]
I0407 12:46:15.529456 1429316 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token p4kop0.df2qjc17ds7iaiam \
I0407 12:46:15.529459 1429316 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:a0218baebfbd26086bf2c1fda945fcf4b4d1b776503555f789838ba1e80aed9c
I0407 12:46:15.532573 1429316 cni.go:84] Creating CNI manager for ""
I0407 12:46:15.532610 1429316 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0407 12:46:15.534535 1429316 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0407 12:46:15.535691 1429316 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
I0407 12:46:15.547497 1429316 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0407 12:46:15.547645 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3694858315 /etc/cni/net.d/1-k8s.conflist
I0407 12:46:15.557811 1429316 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0407 12:46:15.557870 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0407 12:46:15.557891 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent minikube.k8s.io/updated_at=2025_04_07T12_46_15_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
I0407 12:46:15.566997 1429316 ops.go:34] apiserver oom_adj: -16
I0407 12:46:15.628992 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0407 12:46:16.129805 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0407 12:46:16.629609 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0407 12:46:17.129288 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0407 12:46:17.629737 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0407 12:46:18.129916 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0407 12:46:18.629214 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0407 12:46:19.129880 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0407 12:46:19.629695 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0407 12:46:20.129764 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0407 12:46:20.197626 1429316 kubeadm.go:1113] duration metric: took 4.639807769s to wait for elevateKubeSystemPrivileges
I0407 12:46:20.197664 1429316 kubeadm.go:394] duration metric: took 14.643775896s to StartCluster
I0407 12:46:20.197703 1429316 settings.go:142] acquiring lock: {Name:mk1a74bdc4efde062e045448da0c418856eac793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 12:46:20.197785 1429316 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20598-1418173/kubeconfig
I0407 12:46:20.198485 1429316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/kubeconfig: {Name:mk79daf009e4d10ee19338674231a661a076a223 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 12:46:20.198740 1429316 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0407 12:46:20.198900 1429316 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:true volumesnapshots:true yakd:true]
I0407 12:46:20.199009 1429316 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:46:20.199034 1429316 addons.go:69] Setting yakd=true in profile "minikube"
I0407 12:46:20.199052 1429316 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
I0407 12:46:20.199061 1429316 addons.go:238] Setting addon yakd=true in "minikube"
I0407 12:46:20.199070 1429316 addons.go:69] Setting amd-gpu-device-plugin=true in profile "minikube"
I0407 12:46:20.199083 1429316 addons.go:238] Setting addon amd-gpu-device-plugin=true in "minikube"
I0407 12:46:20.199100 1429316 host.go:66] Checking if "minikube" exists ...
I0407 12:46:20.199106 1429316 host.go:66] Checking if "minikube" exists ...
I0407 12:46:20.199249 1429316 addons.go:69] Setting cloud-spanner=true in profile "minikube"
I0407 12:46:20.199278 1429316 addons.go:238] Setting addon cloud-spanner=true in "minikube"
I0407 12:46:20.199297 1429316 host.go:66] Checking if "minikube" exists ...
I0407 12:46:20.199327 1429316 addons.go:69] Setting default-storageclass=true in profile "minikube"
I0407 12:46:20.199353 1429316 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0407 12:46:20.199883 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
I0407 12:46:20.199907 1429316 api_server.go:166] Checking apiserver status ...
I0407 12:46:20.199922 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
I0407 12:46:20.199941 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0407 12:46:20.199942 1429316 api_server.go:166] Checking apiserver status ...
I0407 12:46:20.199982 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0407 12:46:20.200038 1429316 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
I0407 12:46:20.200132 1429316 addons.go:238] Setting addon csi-hostpath-driver=true in "minikube"
I0407 12:46:20.200175 1429316 host.go:66] Checking if "minikube" exists ...
I0407 12:46:20.200269 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
I0407 12:46:20.200284 1429316 api_server.go:166] Checking apiserver status ...
I0407 12:46:20.200314 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0407 12:46:20.200885 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
I0407 12:46:20.200911 1429316 api_server.go:166] Checking apiserver status ...
I0407 12:46:20.200946 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0407 12:46:20.201033 1429316 addons.go:69] Setting gcp-auth=true in profile "minikube"
I0407 12:46:20.201064 1429316 mustload.go:65] Loading cluster: minikube
I0407 12:46:20.201270 1429316 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:46:20.202003 1429316 addons.go:69] Setting storage-provisioner=true in profile "minikube"
I0407 12:46:20.202025 1429316 addons.go:238] Setting addon storage-provisioner=true in "minikube"
I0407 12:46:20.202177 1429316 host.go:66] Checking if "minikube" exists ...
I0407 12:46:20.202879 1429316 out.go:177] * Configuring local host environment ...
I0407 12:46:20.203400 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
I0407 12:46:20.203417 1429316 api_server.go:166] Checking apiserver status ...
I0407 12:46:20.203451 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0407 12:46:20.204888 1429316 out.go:270] *
W0407 12:46:20.204905 1429316 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
W0407 12:46:20.204912 1429316 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
W0407 12:46:20.204919 1429316 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
W0407 12:46:20.204925 1429316 out.go:270] *
W0407 12:46:20.204969 1429316 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
W0407 12:46:20.204976 1429316 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
W0407 12:46:20.204981 1429316 out.go:270] *
W0407 12:46:20.205013 1429316 out.go:270] - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
W0407 12:46:20.205020 1429316 out.go:270] - sudo chown -R $USER $HOME/.kube $HOME/.minikube
W0407 12:46:20.205025 1429316 out.go:270] *
W0407 12:46:20.205032 1429316 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
I0407 12:46:20.205059 1429316 start.go:235] Will wait 6m0s for node &{Name: IP:10.132.0.4 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0407 12:46:20.205947 1429316 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
I0407 12:46:20.205967 1429316 addons.go:238] Setting addon nvidia-device-plugin=true in "minikube"
I0407 12:46:20.205997 1429316 host.go:66] Checking if "minikube" exists ...
I0407 12:46:20.206023 1429316 addons.go:69] Setting metrics-server=true in profile "minikube"
I0407 12:46:20.206045 1429316 addons.go:238] Setting addon metrics-server=true in "minikube"
I0407 12:46:20.206080 1429316 host.go:66] Checking if "minikube" exists ...
I0407 12:46:20.206445 1429316 addons.go:69] Setting registry=true in profile "minikube"
I0407 12:46:20.206466 1429316 addons.go:238] Setting addon registry=true in "minikube"
I0407 12:46:20.206547 1429316 host.go:66] Checking if "minikube" exists ...
I0407 12:46:20.206622 1429316 addons.go:69] Setting volcano=true in profile "minikube"
I0407 12:46:20.206644 1429316 out.go:177] * Verifying Kubernetes components...
I0407 12:46:20.206669 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
I0407 12:46:20.206689 1429316 api_server.go:166] Checking apiserver status ...
I0407 12:46:20.206717 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
I0407 12:46:20.206727 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0407 12:46:20.206734 1429316 api_server.go:166] Checking apiserver status ...
I0407 12:46:20.206780 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0407 12:46:20.206841 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
I0407 12:46:20.206865 1429316 api_server.go:166] Checking apiserver status ...
I0407 12:46:20.206903 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0407 12:46:20.206918 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
I0407 12:46:20.206936 1429316 api_server.go:166] Checking apiserver status ...
I0407 12:46:20.207006 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0407 12:46:20.207280 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
I0407 12:46:20.207337 1429316 api_server.go:166] Checking apiserver status ...
I0407 12:46:20.206656 1429316 addons.go:69] Setting volumesnapshots=true in profile "minikube"
I0407 12:46:20.207373 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0407 12:46:20.207390 1429316 addons.go:238] Setting addon volumesnapshots=true in "minikube"
I0407 12:46:20.207430 1429316 host.go:66] Checking if "minikube" exists ...
I0407 12:46:20.206647 1429316 addons.go:238] Setting addon volcano=true in "minikube"
I0407 12:46:20.207542 1429316 host.go:66] Checking if "minikube" exists ...
I0407 12:46:20.199062 1429316 addons.go:238] Setting addon inspektor-gadget=true in "minikube"
I0407 12:46:20.207852 1429316 host.go:66] Checking if "minikube" exists ...
I0407 12:46:20.208086 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
I0407 12:46:20.208111 1429316 api_server.go:166] Checking apiserver status ...
I0407 12:46:20.208142 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0407 12:46:20.208317 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
I0407 12:46:20.208378 1429316 api_server.go:166] Checking apiserver status ...
I0407 12:46:20.208278 1429316 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0407 12:46:20.208509 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0407 12:46:20.211997 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
I0407 12:46:20.212040 1429316 api_server.go:166] Checking apiserver status ...
I0407 12:46:20.212080 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0407 12:46:20.222010 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
I0407 12:46:20.223069 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
I0407 12:46:20.223578 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
I0407 12:46:20.224955 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
I0407 12:46:20.225989 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
I0407 12:46:20.243468 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
I0407 12:46:20.261161 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
I0407 12:46:20.243475 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
I0407 12:46:20.262094 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
I0407 12:46:20.262176 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
I0407 12:46:20.264542 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
I0407 12:46:20.264606 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
I0407 12:46:20.264844 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
I0407 12:46:20.264909 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
I0407 12:46:20.266233 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
I0407 12:46:20.266293 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
I0407 12:46:20.269905 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
I0407 12:46:20.276799 1429316 api_server.go:204] freezer state: "THAWED"
I0407 12:46:20.276835 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0407 12:46:20.278142 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
I0407 12:46:20.278955 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
I0407 12:46:20.279018 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
I0407 12:46:20.279925 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
I0407 12:46:20.282436 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
I0407 12:46:20.284367 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
ok
I0407 12:46:20.286078 1429316 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0407 12:46:20.287484 1429316 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0407 12:46:20.289453 1429316 api_server.go:204] freezer state: "THAWED"
I0407 12:46:20.289485 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0407 12:46:20.291042 1429316 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0407 12:46:20.292332 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
I0407 12:46:20.292410 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
I0407 12:46:20.293848 1429316 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0407 12:46:20.294863 1429316 api_server.go:204] freezer state: "THAWED"
I0407 12:46:20.294880 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
ok
I0407 12:46:20.294889 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0407 12:46:20.295689 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
I0407 12:46:20.295875 1429316 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0407 12:46:20.296807 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
I0407 12:46:20.296874 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
I0407 12:46:20.297128 1429316 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
I0407 12:46:20.297166 1429316 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0407 12:46:20.297339 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3526351157 /etc/kubernetes/addons/yakd-ns.yaml
I0407 12:46:20.297496 1429316 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0407 12:46:20.299046 1429316 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0407 12:46:20.300004 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
ok
I0407 12:46:20.300028 1429316 host.go:66] Checking if "minikube" exists ...
I0407 12:46:20.301485 1429316 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0407 12:46:20.302071 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
I0407 12:46:20.302142 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
I0407 12:46:20.303806 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
I0407 12:46:20.303862 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
I0407 12:46:20.303964 1429316 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0407 12:46:20.304170 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
I0407 12:46:20.304219 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
I0407 12:46:20.304379 1429316 api_server.go:204] freezer state: "THAWED"
I0407 12:46:20.304394 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0407 12:46:20.305346 1429316 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0407 12:46:20.305381 1429316 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0407 12:46:20.305539 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3653908400 /etc/kubernetes/addons/rbac-external-attacher.yaml
I0407 12:46:20.306131 1429316 api_server.go:204] freezer state: "THAWED"
I0407 12:46:20.306159 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0407 12:46:20.309295 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
I0407 12:46:20.309372 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
ok
I0407 12:46:20.310206 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
I0407 12:46:20.312174 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
ok
I0407 12:46:20.313237 1429316 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
I0407 12:46:20.314436 1429316 out.go:177] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I0407 12:46:20.319103 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
I0407 12:46:20.319175 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
I0407 12:46:20.319721 1429316 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 127.0.0.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0407 12:46:20.321916 1429316 api_server.go:204] freezer state: "THAWED"
I0407 12:46:20.321946 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0407 12:46:20.322334 1429316 out.go:177] - Using image docker.io/registry:2.8.3
I0407 12:46:20.322678 1429316 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I0407 12:46:20.322713 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I0407 12:46:20.322868 1429316 api_server.go:204] freezer state: "THAWED"
I0407 12:46:20.322897 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0407 12:46:20.323017 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube252749164 /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I0407 12:46:20.324672 1429316 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
I0407 12:46:20.324696 1429316 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
I0407 12:46:20.324702 1429316 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0407 12:46:20.324712 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0407 12:46:20.324836 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1922338861 /etc/kubernetes/addons/yakd-sa.yaml
I0407 12:46:20.324992 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2274747492 /etc/kubernetes/addons/registry-rc.yaml
I0407 12:46:20.326202 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
I0407 12:46:20.326256 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
I0407 12:46:20.326328 1429316 api_server.go:204] freezer state: "THAWED"
I0407 12:46:20.326347 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0407 12:46:20.327053 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
ok
I0407 12:46:20.327340 1429316 api_server.go:204] freezer state: "THAWED"
I0407 12:46:20.327365 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0407 12:46:20.327998 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
ok
I0407 12:46:20.329088 1429316 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.11.0
I0407 12:46:20.330035 1429316 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.30
I0407 12:46:20.332248 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
ok
I0407 12:46:20.332465 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
ok
I0407 12:46:20.334867 1429316 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
I0407 12:46:20.334922 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0407 12:46:20.335101 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1711987977 /etc/kubernetes/addons/deployment.yaml
I0407 12:46:20.336209 1429316 api_server.go:204] freezer state: "THAWED"
I0407 12:46:20.336234 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0407 12:46:20.336268 1429316 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
I0407 12:46:20.336319 1429316 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0407 12:46:20.336931 1429316 api_server.go:204] freezer state: "THAWED"
I0407 12:46:20.336954 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0407 12:46:20.340717 1429316 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
I0407 12:46:20.340791 1429316 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
I0407 12:46:20.340948 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1569781149 /etc/kubernetes/addons/ig-crd.yaml
I0407 12:46:20.340978 1429316 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0407 12:46:20.341009 1429316 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
I0407 12:46:20.341016 1429316 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
I0407 12:46:20.341047 1429316 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
I0407 12:46:20.340768 1429316 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.11.0
I0407 12:46:20.345492 1429316 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.11.0
I0407 12:46:20.345582 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
ok
I0407 12:46:20.345760 1429316 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
I0407 12:46:20.345786 1429316 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0407 12:46:20.345907 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2676008935 /etc/kubernetes/addons/yakd-crb.yaml
I0407 12:46:20.346908 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I0407 12:46:20.350669 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
ok
I0407 12:46:20.350951 1429316 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0407 12:46:20.350997 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (480278 bytes)
I0407 12:46:20.352791 1429316 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
I0407 12:46:20.356470 1429316 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0407 12:46:20.356511 1429316 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0407 12:46:20.357258 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3825462376 /etc/kubernetes/addons/volcano-deployment.yaml
I0407 12:46:20.357984 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1913311062 /etc/kubernetes/addons/rbac-hostpath.yaml
I0407 12:46:20.358967 1429316 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0407 12:46:20.359621 1429316 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
I0407 12:46:20.359658 1429316 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0407 12:46:20.359664 1429316 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0407 12:46:20.359691 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0407 12:46:20.359845 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2952795493 /etc/kubernetes/addons/registry-svc.yaml
I0407 12:46:20.360495 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube189832616 /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0407 12:46:20.361524 1429316 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0407 12:46:20.361558 1429316 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0407 12:46:20.365172 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4120191610 /etc/kubernetes/addons/metrics-apiservice.yaml
I0407 12:46:20.374944 1429316 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
I0407 12:46:20.374992 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
I0407 12:46:20.375186 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2179279931 /etc/kubernetes/addons/ig-deployment.yaml
I0407 12:46:20.379041 1429316 api_server.go:204] freezer state: "THAWED"
I0407 12:46:20.379374 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0407 12:46:20.380385 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0407 12:46:20.380560 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3051500119 /etc/kubernetes/addons/storage-provisioner.yaml
I0407 12:46:20.385196 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
ok
I0407 12:46:20.387870 1429316 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0407 12:46:20.388702 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0407 12:46:20.390037 1429316 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
I0407 12:46:20.390067 1429316 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0407 12:46:20.390187 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube31446616 /etc/kubernetes/addons/yakd-svc.yaml
I0407 12:46:20.390764 1429316 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0407 12:46:20.390800 1429316 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0407 12:46:20.391569 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3779181310 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0407 12:46:20.394337 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0407 12:46:20.398769 1429316 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0407 12:46:20.398806 1429316 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0407 12:46:20.398933 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube906689499 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0407 12:46:20.402373 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0407 12:46:20.402640 1429316 api_server.go:204] freezer state: "THAWED"
I0407 12:46:20.402664 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0407 12:46:20.405039 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I0407 12:46:20.408207 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
ok
I0407 12:46:20.409282 1429316 addons.go:238] Setting addon default-storageclass=true in "minikube"
I0407 12:46:20.409335 1429316 host.go:66] Checking if "minikube" exists ...
I0407 12:46:20.410204 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0407 12:46:20.411381 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
I0407 12:46:20.411413 1429316 api_server.go:166] Checking apiserver status ...
I0407 12:46:20.411457 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0407 12:46:20.416651 1429316 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0407 12:46:20.416753 1429316 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0407 12:46:20.416972 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2826717481 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0407 12:46:20.419552 1429316 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
I0407 12:46:20.419587 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0407 12:46:20.419724 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3788447769 /etc/kubernetes/addons/registry-proxy.yaml
I0407 12:46:20.421654 1429316 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0407 12:46:20.421683 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0407 12:46:20.422435 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1998580135 /etc/kubernetes/addons/metrics-server-deployment.yaml
I0407 12:46:20.425248 1429316 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
I0407 12:46:20.425278 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0407 12:46:20.425416 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3456691057 /etc/kubernetes/addons/yakd-dp.yaml
I0407 12:46:20.470027 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0407 12:46:20.471917 1429316 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0407 12:46:20.471958 1429316 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0407 12:46:20.472122 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2976229442 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0407 12:46:20.472656 1429316 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0407 12:46:20.472682 1429316 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0407 12:46:20.472807 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube422639263 /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0407 12:46:20.474651 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0407 12:46:20.497912 1429316 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0407 12:46:20.497967 1429316 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0407 12:46:20.498143 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3106965212 /etc/kubernetes/addons/metrics-server-rbac.yaml
I0407 12:46:20.514273 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
I0407 12:46:20.536535 1429316 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0407 12:46:20.536573 1429316 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0407 12:46:20.536697 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3748851246 /etc/kubernetes/addons/rbac-external-resizer.yaml
I0407 12:46:20.558030 1429316 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0407 12:46:20.558071 1429316 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0407 12:46:20.558226 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3460275038 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0407 12:46:20.583644 1429316 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0407 12:46:20.583701 1429316 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0407 12:46:20.583856 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4292587570 /etc/kubernetes/addons/metrics-server-service.yaml
I0407 12:46:20.602264 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0407 12:46:20.613494 1429316 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0407 12:46:20.613554 1429316 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0407 12:46:20.613690 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube210220550 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0407 12:46:20.697780 1429316 exec_runner.go:51] Run: sudo systemctl start kubelet
I0407 12:46:20.710202 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
I0407 12:46:20.710292 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
I0407 12:46:20.726957 1429316 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0407 12:46:20.727004 1429316 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0407 12:46:20.727156 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1419895819 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0407 12:46:20.758069 1429316 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent" to be "Ready" ...
I0407 12:46:20.760314 1429316 node_ready.go:49] node "ubuntu-20-agent" has status "Ready":"True"
I0407 12:46:20.760337 1429316 node_ready.go:38] duration metric: took 2.226937ms for node "ubuntu-20-agent" to be "Ready" ...
I0407 12:46:20.760348 1429316 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0407 12:46:20.776617 1429316 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0407 12:46:20.776664 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0407 12:46:20.779959 1429316 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-86df5" in "kube-system" namespace to be "Ready" ...
I0407 12:46:20.786355 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3871166456 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0407 12:46:20.823708 1429316 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0407 12:46:20.823745 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0407 12:46:20.823889 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3128803471 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0407 12:46:20.824070 1429316 api_server.go:204] freezer state: "THAWED"
I0407 12:46:20.824088 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0407 12:46:20.831089 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
ok
I0407 12:46:20.831141 1429316 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0407 12:46:20.831160 1429316 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
I0407 12:46:20.831168 1429316 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
I0407 12:46:20.831207 1429316 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
I0407 12:46:20.856878 1429316 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0407 12:46:20.856920 1429316 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0407 12:46:20.859857 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube168228944 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0407 12:46:20.883053 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0407 12:46:20.886655 1429316 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0407 12:46:20.886842 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2746251177 /etc/kubernetes/addons/storageclass.yaml
I0407 12:46:20.916503 1429316 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0407 12:46:20.916548 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0407 12:46:20.916700 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4173374182 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0407 12:46:20.925691 1429316 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
I0407 12:46:20.958076 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0407 12:46:20.987499 1429316 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0407 12:46:20.987568 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0407 12:46:20.987741 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2420038711 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0407 12:46:21.040807 1429316 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0407 12:46:21.040860 1429316 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0407 12:46:21.041041 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3268416938 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0407 12:46:21.136865 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0407 12:46:21.409763 1429316 addons.go:479] Verifying addon registry=true in "minikube"
I0407 12:46:21.412264 1429316 out.go:177] * Verifying registry addon...
I0407 12:46:21.415321 1429316 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0407 12:46:21.418713 1429316 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0407 12:46:21.418736 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:21.433549 1429316 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0407 12:46:21.570206 1429316 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.159952141s)
I0407 12:46:21.648841 1429316 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.243753231s)
I0407 12:46:21.711886 1429316 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.24178473s)
I0407 12:46:21.717694 1429316 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube service yakd-dashboard -n yakd-dashboard
I0407 12:46:21.720411 1429316 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.118083477s)
I0407 12:46:21.720456 1429316 addons.go:479] Verifying addon metrics-server=true in "minikube"
I0407 12:46:21.922074 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:22.419286 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:22.595941 1429316 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.712830875s)
W0407 12:46:22.595992 1429316 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0407 12:46:22.596030 1429316 retry.go:31] will retry after 202.751969ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0407 12:46:22.786098 1429316 pod_ready.go:103] pod "amd-gpu-device-plugin-86df5" in "kube-system" namespace has status "Ready":"False"
I0407 12:46:22.799303 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0407 12:46:22.919554 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:23.425450 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:23.456836 1429316 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.319887875s)
I0407 12:46:23.456881 1429316 addons.go:479] Verifying addon csi-hostpath-driver=true in "minikube"
I0407 12:46:23.462996 1429316 out.go:177] * Verifying csi-hostpath-driver addon...
I0407 12:46:23.467517 1429316 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0407 12:46:23.500910 1429316 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0407 12:46:23.500946 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:23.678635 1429316 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.276218032s)
I0407 12:46:23.919571 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:23.987515 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:24.419253 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:24.471440 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:24.919038 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:24.971484 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:25.285637 1429316 pod_ready.go:93] pod "amd-gpu-device-plugin-86df5" in "kube-system" namespace has status "Ready":"True"
I0407 12:46:25.285663 1429316 pod_ready.go:82] duration metric: took 4.505662003s for pod "amd-gpu-device-plugin-86df5" in "kube-system" namespace to be "Ready" ...
I0407 12:46:25.285673 1429316 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-28dsp" in "kube-system" namespace to be "Ready" ...
I0407 12:46:25.419494 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:25.521115 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:25.533187 1429316 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.733792804s)
I0407 12:46:25.918839 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:25.971084 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:26.419692 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:26.472363 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:26.919941 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:27.020780 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:27.108165 1429316 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0407 12:46:27.108484 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1890754882 /var/lib/minikube/google_application_credentials.json
I0407 12:46:27.119734 1429316 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0407 12:46:27.119899 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2219183012 /var/lib/minikube/google_cloud_project
I0407 12:46:27.131325 1429316 addons.go:238] Setting addon gcp-auth=true in "minikube"
I0407 12:46:27.131402 1429316 host.go:66] Checking if "minikube" exists ...
I0407 12:46:27.132217 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
I0407 12:46:27.132247 1429316 api_server.go:166] Checking apiserver status ...
I0407 12:46:27.132286 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0407 12:46:27.152075 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
I0407 12:46:27.163123 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
I0407 12:46:27.163212 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
I0407 12:46:27.172494 1429316 api_server.go:204] freezer state: "THAWED"
I0407 12:46:27.172531 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0407 12:46:27.177380 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
ok
I0407 12:46:27.177462 1429316 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
I0407 12:46:27.180770 1429316 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I0407 12:46:27.182360 1429316 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I0407 12:46:27.183717 1429316 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0407 12:46:27.183761 1429316 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0407 12:46:27.183920 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1049495724 /etc/kubernetes/addons/gcp-auth-ns.yaml
I0407 12:46:27.196439 1429316 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0407 12:46:27.196488 1429316 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0407 12:46:27.196686 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube693064940 /etc/kubernetes/addons/gcp-auth-service.yaml
I0407 12:46:27.206666 1429316 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0407 12:46:27.206702 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0407 12:46:27.206855 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube58906347 /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0407 12:46:27.218711 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0407 12:46:27.291476 1429316 pod_ready.go:93] pod "coredns-668d6bf9bc-28dsp" in "kube-system" namespace has status "Ready":"True"
I0407 12:46:27.291502 1429316 pod_ready.go:82] duration metric: took 2.005821765s for pod "coredns-668d6bf9bc-28dsp" in "kube-system" namespace to be "Ready" ...
I0407 12:46:27.291519 1429316 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-c67zv" in "kube-system" namespace to be "Ready" ...
I0407 12:46:27.295922 1429316 pod_ready.go:93] pod "coredns-668d6bf9bc-c67zv" in "kube-system" namespace has status "Ready":"True"
I0407 12:46:27.295949 1429316 pod_ready.go:82] duration metric: took 4.420137ms for pod "coredns-668d6bf9bc-c67zv" in "kube-system" namespace to be "Ready" ...
I0407 12:46:27.295962 1429316 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
I0407 12:46:27.299925 1429316 pod_ready.go:93] pod "etcd-ubuntu-20-agent" in "kube-system" namespace has status "Ready":"True"
I0407 12:46:27.299965 1429316 pod_ready.go:82] duration metric: took 3.992923ms for pod "etcd-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
I0407 12:46:27.299978 1429316 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
I0407 12:46:27.419975 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:27.471432 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:27.920057 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:27.962665 1429316 addons.go:479] Verifying addon gcp-auth=true in "minikube"
I0407 12:46:27.965706 1429316 out.go:177] * Verifying gcp-auth addon...
I0407 12:46:27.968051 1429316 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0407 12:46:28.020196 1429316 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0407 12:46:28.020499 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:28.420045 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:28.471286 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:28.805902 1429316 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent" in "kube-system" namespace has status "Ready":"True"
I0407 12:46:28.805928 1429316 pod_ready.go:82] duration metric: took 1.505941321s for pod "kube-apiserver-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
I0407 12:46:28.805938 1429316 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
I0407 12:46:28.811208 1429316 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent" in "kube-system" namespace has status "Ready":"True"
I0407 12:46:28.811254 1429316 pod_ready.go:82] duration metric: took 5.307688ms for pod "kube-controller-manager-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
I0407 12:46:28.811269 1429316 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4ktb9" in "kube-system" namespace to be "Ready" ...
I0407 12:46:28.889612 1429316 pod_ready.go:93] pod "kube-proxy-4ktb9" in "kube-system" namespace has status "Ready":"True"
I0407 12:46:28.889639 1429316 pod_ready.go:82] duration metric: took 78.35951ms for pod "kube-proxy-4ktb9" in "kube-system" namespace to be "Ready" ...
I0407 12:46:28.889652 1429316 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
I0407 12:46:28.919192 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:29.020417 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:29.289605 1429316 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent" in "kube-system" namespace has status "Ready":"True"
I0407 12:46:29.289637 1429316 pod_ready.go:82] duration metric: took 399.974892ms for pod "kube-scheduler-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
I0407 12:46:29.289653 1429316 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-qtjqk" in "kube-system" namespace to be "Ready" ...
I0407 12:46:29.419981 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:29.471030 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:29.918490 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:29.971448 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:30.419178 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:30.471563 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:30.918873 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:31.020301 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:31.296406 1429316 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-qtjqk" in "kube-system" namespace has status "Ready":"False"
I0407 12:46:31.419473 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:31.471850 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:31.919476 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:31.971849 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:32.419096 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:32.471663 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:32.919835 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:32.971160 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:33.419000 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:33.519607 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:33.794578 1429316 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-qtjqk" in "kube-system" namespace has status "Ready":"False"
I0407 12:46:33.918521 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:33.989387 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:34.419833 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:34.470704 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:34.919739 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:35.020689 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:35.295351 1429316 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-qtjqk" in "kube-system" namespace has status "Ready":"True"
I0407 12:46:35.295382 1429316 pod_ready.go:82] duration metric: took 6.005719807s for pod "nvidia-device-plugin-daemonset-qtjqk" in "kube-system" namespace to be "Ready" ...
I0407 12:46:35.295394 1429316 pod_ready.go:39] duration metric: took 14.53503087s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0407 12:46:35.295421 1429316 api_server.go:52] waiting for apiserver process to appear ...
I0407 12:46:35.295487 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0407 12:46:35.314133 1429316 api_server.go:72] duration metric: took 15.109039821s to wait for apiserver process to appear ...
I0407 12:46:35.314163 1429316 api_server.go:88] waiting for apiserver healthz status ...
I0407 12:46:35.314188 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0407 12:46:35.317933 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
ok
I0407 12:46:35.318854 1429316 api_server.go:141] control plane version: v1.32.2
I0407 12:46:35.318881 1429316 api_server.go:131] duration metric: took 4.708338ms to wait for apiserver health ...
I0407 12:46:35.318889 1429316 system_pods.go:43] waiting for kube-system pods to appear ...
I0407 12:46:35.322611 1429316 system_pods.go:59] 17 kube-system pods found
I0407 12:46:35.322656 1429316 system_pods.go:61] "amd-gpu-device-plugin-86df5" [ba9ab47c-61f0-4711-959e-29c976ef7c89] Running
I0407 12:46:35.322666 1429316 system_pods.go:61] "coredns-668d6bf9bc-28dsp" [c3edd2f1-75f3-4345-9544-93c2a6f0f5d3] Running
I0407 12:46:35.322677 1429316 system_pods.go:61] "csi-hostpath-attacher-0" [8f7840f4-1626-4a29-be20-6998152854a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0407 12:46:35.322690 1429316 system_pods.go:61] "csi-hostpath-resizer-0" [06f1b8f1-d561-44df-8d0e-e5191281a47f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0407 12:46:35.322700 1429316 system_pods.go:61] "csi-hostpathplugin-n7jq8" [7f9c7966-52c5-4bcb-84c7-1915efadd81b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0407 12:46:35.322708 1429316 system_pods.go:61] "etcd-ubuntu-20-agent" [13ea58ff-509e-403d-90ae-292ab15ea901] Running
I0407 12:46:35.322712 1429316 system_pods.go:61] "kube-apiserver-ubuntu-20-agent" [8832ae71-7c9c-4d9e-a74d-d2dc87fcc0a1] Running
I0407 12:46:35.322718 1429316 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent" [73ba7bcb-e73b-4403-a7d7-9532589d0ab9] Running
I0407 12:46:35.322723 1429316 system_pods.go:61] "kube-proxy-4ktb9" [f218d86a-31ef-4897-b9e4-d53c0a6eb365] Running
I0407 12:46:35.322728 1429316 system_pods.go:61] "kube-scheduler-ubuntu-20-agent" [58f3fb78-0ec4-41c5-a20f-9a0df3c2f9ce] Running
I0407 12:46:35.322741 1429316 system_pods.go:61] "metrics-server-7fbb699795-kfmft" [723d2ed5-e3cb-4cc3-80d7-62e3c337502a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0407 12:46:35.322746 1429316 system_pods.go:61] "nvidia-device-plugin-daemonset-qtjqk" [861c99d3-8db6-4690-9b9a-9445eb29a1b1] Running
I0407 12:46:35.322754 1429316 system_pods.go:61] "registry-6c88467877-kwnrb" [4fbcb06c-10f2-48eb-ae63-5c09b49e6099] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0407 12:46:35.322762 1429316 system_pods.go:61] "registry-proxy-gpv45" [1ee0f741-4f8b-4063-832c-bfc311b610aa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0407 12:46:35.322772 1429316 system_pods.go:61] "snapshot-controller-68b874b76f-7465t" [bacd4eea-22af-4b2e-a3c3-c11adcd9d06e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0407 12:46:35.322782 1429316 system_pods.go:61] "snapshot-controller-68b874b76f-bnf6p" [36a09b5c-f06d-41d9-b331-82f98e9152c3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0407 12:46:35.322787 1429316 system_pods.go:61] "storage-provisioner" [18b8d7ec-1526-45c5-8660-6ab5bcb5dde2] Running
I0407 12:46:35.322795 1429316 system_pods.go:74] duration metric: took 3.900184ms to wait for pod list to return data ...
I0407 12:46:35.322803 1429316 default_sa.go:34] waiting for default service account to be created ...
I0407 12:46:35.325143 1429316 default_sa.go:45] found service account: "default"
I0407 12:46:35.325165 1429316 default_sa.go:55] duration metric: took 2.356952ms for default service account to be created ...
I0407 12:46:35.325173 1429316 system_pods.go:116] waiting for k8s-apps to be running ...
I0407 12:46:35.328166 1429316 system_pods.go:86] 17 kube-system pods found
I0407 12:46:35.328197 1429316 system_pods.go:89] "amd-gpu-device-plugin-86df5" [ba9ab47c-61f0-4711-959e-29c976ef7c89] Running
I0407 12:46:35.328204 1429316 system_pods.go:89] "coredns-668d6bf9bc-28dsp" [c3edd2f1-75f3-4345-9544-93c2a6f0f5d3] Running
I0407 12:46:35.328211 1429316 system_pods.go:89] "csi-hostpath-attacher-0" [8f7840f4-1626-4a29-be20-6998152854a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0407 12:46:35.328218 1429316 system_pods.go:89] "csi-hostpath-resizer-0" [06f1b8f1-d561-44df-8d0e-e5191281a47f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0407 12:46:35.328232 1429316 system_pods.go:89] "csi-hostpathplugin-n7jq8" [7f9c7966-52c5-4bcb-84c7-1915efadd81b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0407 12:46:35.328239 1429316 system_pods.go:89] "etcd-ubuntu-20-agent" [13ea58ff-509e-403d-90ae-292ab15ea901] Running
I0407 12:46:35.328243 1429316 system_pods.go:89] "kube-apiserver-ubuntu-20-agent" [8832ae71-7c9c-4d9e-a74d-d2dc87fcc0a1] Running
I0407 12:46:35.328248 1429316 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent" [73ba7bcb-e73b-4403-a7d7-9532589d0ab9] Running
I0407 12:46:35.328251 1429316 system_pods.go:89] "kube-proxy-4ktb9" [f218d86a-31ef-4897-b9e4-d53c0a6eb365] Running
I0407 12:46:35.328262 1429316 system_pods.go:89] "kube-scheduler-ubuntu-20-agent" [58f3fb78-0ec4-41c5-a20f-9a0df3c2f9ce] Running
I0407 12:46:35.328271 1429316 system_pods.go:89] "metrics-server-7fbb699795-kfmft" [723d2ed5-e3cb-4cc3-80d7-62e3c337502a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0407 12:46:35.328275 1429316 system_pods.go:89] "nvidia-device-plugin-daemonset-qtjqk" [861c99d3-8db6-4690-9b9a-9445eb29a1b1] Running
I0407 12:46:35.328280 1429316 system_pods.go:89] "registry-6c88467877-kwnrb" [4fbcb06c-10f2-48eb-ae63-5c09b49e6099] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0407 12:46:35.328289 1429316 system_pods.go:89] "registry-proxy-gpv45" [1ee0f741-4f8b-4063-832c-bfc311b610aa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0407 12:46:35.328300 1429316 system_pods.go:89] "snapshot-controller-68b874b76f-7465t" [bacd4eea-22af-4b2e-a3c3-c11adcd9d06e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0407 12:46:35.328315 1429316 system_pods.go:89] "snapshot-controller-68b874b76f-bnf6p" [36a09b5c-f06d-41d9-b331-82f98e9152c3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0407 12:46:35.328320 1429316 system_pods.go:89] "storage-provisioner" [18b8d7ec-1526-45c5-8660-6ab5bcb5dde2] Running
I0407 12:46:35.328331 1429316 system_pods.go:126] duration metric: took 3.151221ms to wait for k8s-apps to be running ...
I0407 12:46:35.328339 1429316 system_svc.go:44] waiting for kubelet service to be running ....
I0407 12:46:35.328391 1429316 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0407 12:46:35.342621 1429316 system_svc.go:56] duration metric: took 14.266686ms WaitForService to wait for kubelet
I0407 12:46:35.342652 1429316 kubeadm.go:582] duration metric: took 15.137567518s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0407 12:46:35.342672 1429316 node_conditions.go:102] verifying NodePressure condition ...
I0407 12:46:35.345647 1429316 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0407 12:46:35.345689 1429316 node_conditions.go:123] node cpu capacity is 8
I0407 12:46:35.345708 1429316 node_conditions.go:105] duration metric: took 3.029456ms to run NodePressure ...
I0407 12:46:35.345725 1429316 start.go:241] waiting for startup goroutines ...
I0407 12:46:35.418575 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:35.471738 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:35.919460 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:35.971459 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:36.418927 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:36.470944 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:36.920012 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:36.971236 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:37.419625 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:37.471551 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:37.919187 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:37.971281 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:38.419700 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:38.471414 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:38.919826 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:38.971034 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:39.419257 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:39.471577 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:39.919763 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:39.970822 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:40.419580 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:40.471764 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:40.919389 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:40.971543 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:41.418325 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:41.471154 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:41.919369 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:41.971517 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:42.419213 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:42.471390 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:42.919024 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:46:43.020384 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:43.419813 1429316 kapi.go:107] duration metric: took 22.004486403s to wait for kubernetes.io/minikube-addons=registry ...
I0407 12:46:43.471031 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:43.972893 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:44.472004 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:44.971721 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:45.471738 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:45.972198 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:46.472443 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:46.972278 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:47.483667 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:47.971419 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:48.472169 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:48.976645 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:49.471072 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:49.971622 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:50.471297 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:50.972415 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:51.471308 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:51.972434 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:52.471555 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:52.975728 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:53.471488 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:53.971405 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:54.471915 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:54.972725 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:55.471662 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:56.020761 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:56.471703 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:56.972347 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:57.471091 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:57.972508 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:58.471079 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:58.972451 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:59.471337 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:46:59.972044 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:47:00.471100 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:47:00.972307 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:47:01.472123 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:47:01.972205 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:47:02.472657 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:47:02.972119 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:47:03.517910 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:47:03.972052 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:47:04.472123 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:47:04.972034 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:47:05.471642 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:47:05.971701 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:47:06.471445 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:47:06.971897 1429316 kapi.go:107] duration metric: took 43.504396595s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0407 12:47:49.972271 1429316 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0407 12:47:49.972299 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:47:50.471070 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:47:50.971560 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:47:51.472444 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:47:51.971704 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:47:52.472395 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:47:52.977847 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:47:53.471523 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:47:53.972000 1429316 kapi.go:107] duration metric: took 1m26.003943819s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0407 12:47:53.973797 1429316 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
I0407 12:47:53.975209 1429316 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0407 12:47:53.976604 1429316 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0407 12:47:53.978619 1429316 out.go:177] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, cloud-spanner, storage-provisioner, default-storageclass, inspektor-gadget, yakd, metrics-server, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
I0407 12:47:53.980134 1429316 addons.go:514] duration metric: took 1m33.781240974s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin cloud-spanner storage-provisioner default-storageclass inspektor-gadget yakd metrics-server volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
I0407 12:47:53.980187 1429316 start.go:246] waiting for cluster config update ...
I0407 12:47:53.980213 1429316 start.go:255] writing updated cluster config ...
I0407 12:47:53.980556 1429316 exec_runner.go:51] Run: rm -f paused
I0407 12:47:54.030053 1429316 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
I0407 12:47:54.031911 1429316 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
==> Docker <==
-- Logs begin at Fri 2025-02-07 00:17:37 UTC, end at Mon 2025-04-07 12:53:54 UTC. --
Apr 07 12:47:36 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:47:36.759946538Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Apr 07 12:47:43 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:47:43.232636667Z" level=warning msg="reference for unknown type: " digest="sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86" remote="docker.io/volcanosh/vc-scheduler@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86"
Apr 07 12:47:43 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:47:43.744910893Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Apr 07 12:47:43 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:47:43.747018673Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Apr 07 12:47:50 ubuntu-20-agent cri-dockerd[1429899]: time="2025-04-07T12:47:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1e7d13c91330dcf53cae5ee7728e5b9a824e936c78a34d13fd9b3b31cde6e35a/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
Apr 07 12:47:50 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:47:50.493780846Z" level=warning msg="reference for unknown type: " digest="sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7"
Apr 07 12:47:52 ubuntu-20-agent cri-dockerd[1429899]: time="2025-04-07T12:47:52Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7"
Apr 07 12:48:18 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:48:18.226947760Z" level=warning msg="reference for unknown type: " digest="sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3" remote="docker.io/volcanosh/vc-controller-manager@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3"
Apr 07 12:48:18 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:48:18.741344775Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Apr 07 12:48:18 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:48:18.743579696Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Apr 07 12:48:25 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:48:25.226866414Z" level=warning msg="reference for unknown type: " digest="sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86" remote="docker.io/volcanosh/vc-scheduler@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86"
Apr 07 12:48:25 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:48:25.737717502Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Apr 07 12:48:25 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:48:25.739857217Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Apr 07 12:49:41 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:49:41.236422558Z" level=warning msg="reference for unknown type: " digest="sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3" remote="docker.io/volcanosh/vc-controller-manager@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3"
Apr 07 12:49:42 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:49:42.056850510Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Apr 07 12:49:42 ubuntu-20-agent cri-dockerd[1429899]: time="2025-04-07T12:49:42Z" level=info msg="Stop pulling image docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3: docker.io/volcanosh/vc-controller-manager@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3: Pulling from volcanosh/vc-controller-manager"
Apr 07 12:49:55 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:49:55.226083604Z" level=warning msg="reference for unknown type: " digest="sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86" remote="docker.io/volcanosh/vc-scheduler@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86"
Apr 07 12:49:55 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:49:55.738260649Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Apr 07 12:49:55 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:49:55.740361300Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Apr 07 12:52:32 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:52:32.229990470Z" level=warning msg="reference for unknown type: " digest="sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3" remote="docker.io/volcanosh/vc-controller-manager@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3"
Apr 07 12:52:33 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:52:33.045353202Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Apr 07 12:52:33 ubuntu-20-agent cri-dockerd[1429899]: time="2025-04-07T12:52:33Z" level=info msg="Stop pulling image docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3: docker.io/volcanosh/vc-controller-manager@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3: Pulling from volcanosh/vc-controller-manager"
Apr 07 12:52:42 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:52:42.226132974Z" level=warning msg="reference for unknown type: " digest="sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86" remote="docker.io/volcanosh/vc-scheduler@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86"
Apr 07 12:52:43 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:52:43.041209701Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Apr 07 12:52:43 ubuntu-20-agent cri-dockerd[1429899]: time="2025-04-07T12:52:43Z" level=info msg="Stop pulling image docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86: docker.io/volcanosh/vc-scheduler@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86: Pulling from volcanosh/vc-scheduler"
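The repeated `toomanyrequests` entries above are Docker Hub's unauthenticated pull rate limit, not a transient network fault: the daemon keeps retrying the `volcanosh/vc-scheduler` and `vc-controller-manager` pulls and is rejected each time, which is why the volcano pods never leave `Pending`. A quick way to confirm this failure mode from a captured daemon log, plus the usual remedies, sketched below (the sample log line is illustrative; the remedy commands assume a Docker Hub account and a running minikube cluster):

```shell
# Confirm the failure is rate limiting by counting the error code in the log.
# (Sample line standing in for `journalctl -u docker` output.)
log='level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit."'
echo "$log" | grep -c 'toomanyrequests'   # non-zero => rate-limited, not a network error

# Typical remedies (not runnable here):
#   docker login                                        # authenticated pulls get a higher quota
#   minikube image load volcanosh/vc-scheduler:v1.11.0  # side-load the image from the host cache
```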
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
43612f6a057cd gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7 6 minutes ago Running gcp-auth 0 1e7d13c91330d gcp-auth-cd9db85c-jmrjf
10fae591b8f52 volcanosh/vc-webhook-manager@sha256:2ceea91a5f05a366955f20cb1ab266b4732f906a205cb2e3f5930cf93335aeee 6 minutes ago Running admission 0 1bf5b4675c5d6 volcano-admission-75d8f6b5c-pldpl
d8d4df3245c1b registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 6 minutes ago Running csi-snapshotter 0 8742f0500ba41 csi-hostpathplugin-n7jq8
9e774f36f36c9 registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 6 minutes ago Running csi-provisioner 0 8742f0500ba41 csi-hostpathplugin-n7jq8
14093b9eed3cd registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0 6 minutes ago Running liveness-probe 0 8742f0500ba41 csi-hostpathplugin-n7jq8
84f7a19f6f36c registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 6 minutes ago Running hostpath 0 8742f0500ba41 csi-hostpathplugin-n7jq8
4fa315740091f registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c 6 minutes ago Running node-driver-registrar 0 8742f0500ba41 csi-hostpathplugin-n7jq8
647294f13c314 registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 6 minutes ago Running csi-resizer 0 050e14ae928f5 csi-hostpath-resizer-0
ba7ce3888e0c5 registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c 6 minutes ago Running csi-external-health-monitor-controller 0 8742f0500ba41 csi-hostpathplugin-n7jq8
2b2cd10e8243c registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 6 minutes ago Running csi-attacher 0 ddc86519dee5d csi-hostpath-attacher-0
fd738334ee3ba volcanosh/vc-webhook-manager@sha256:2ceea91a5f05a366955f20cb1ab266b4732f906a205cb2e3f5930cf93335aeee 7 minutes ago Exited main 0 f1d6f87bba138 volcano-admission-init-4bqwh
a247be89a521f registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 7 minutes ago Running volume-snapshot-controller 0 10f25a02c6c25 snapshot-controller-68b874b76f-7465t
6320ac7c7873b registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 7 minutes ago Running volume-snapshot-controller 0 cee952ace26d6 snapshot-controller-68b874b76f-bnf6p
fdd971918b4ba ghcr.io/inspektor-gadget/inspektor-gadget@sha256:886412e63d6c580c50b3b7b59eee709a870768a7b5d0d9c27d66fe2a32c555e0 7 minutes ago Running gadget 0 87e95256e8189 gadget-qfz76
3cde9dbb13733 marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 7 minutes ago Running yakd 0 17c37c1766f9a yakd-dashboard-575dd5996b-qf5qb
a5658dd8aadd5 registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9 7 minutes ago Running metrics-server 0 4e9eb194ca166 metrics-server-7fbb699795-kfmft
ae8585906fcb9 gcr.io/k8s-minikube/kube-registry-proxy@sha256:60ab3508367ad093b4b891231572577371a29f838d61e64d7f7d093d961c862c 7 minutes ago Running registry-proxy 0 2d13057c5cdc3 registry-proxy-gpv45
3a2cbb8e4e131 registry@sha256:319881be2ee9e345d5837d15842a04268de6a139e23be42654fc7664fc6eaf52 7 minutes ago Running registry 0 8817996a24643 registry-6c88467877-kwnrb
b7c45376b2746 gcr.io/cloud-spanner-emulator/emulator@sha256:a9c7274e55bba48a4f5bec813a11087d9f2e3a3f7e583dae9873aae2ec17f125 7 minutes ago Running cloud-spanner-emulator 0 96406b22e6497 cloud-spanner-emulator-cc9755fc7-8d2gd
08a692aaf85f6 nvcr.io/nvidia/k8s-device-plugin@sha256:7089559ce6153018806857f5049085bae15b3bf6f1c8bd19d8b12f707d087dea 7 minutes ago Running nvidia-device-plugin-ctr 0 158136d890242 nvidia-device-plugin-daemonset-qtjqk
28e171950f5a7 rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 7 minutes ago Running amd-gpu-device-plugin 0 0cccbb3588406 amd-gpu-device-plugin-86df5
9367d6480bcd3 6e38f40d628db 7 minutes ago Running storage-provisioner 0 4e46329d24f22 storage-provisioner
e6de974948a2b f1332858868e1 7 minutes ago Running kube-proxy 0 7cbe52af79cd0 kube-proxy-4ktb9
634b0f31bf167 c69fa2e9cbf5f 7 minutes ago Running coredns 0 fb409e8883373 coredns-668d6bf9bc-28dsp
8e962b9f09173 d8e673e7c9983 7 minutes ago Running kube-scheduler 0 0cc01a4584319 kube-scheduler-ubuntu-20-agent
1b21328ae243e 85b7a174738ba 7 minutes ago Running kube-apiserver 0 3da9550e5056a kube-apiserver-ubuntu-20-agent
e23f65eeb6aff a9e7e6b294baf 7 minutes ago Running etcd 0 016f56a70aaee etcd-ubuntu-20-agent
953db0d2f82d9 b6a454c5a800d 7 minutes ago Running kube-controller-manager 0 149fe9b8110db kube-controller-manager-ubuntu-20-agent
==> coredns [634b0f31bf16] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 876af57068f747144f204884e843f6792435faec005aab1f10bd81e6ffca54e010e4374994d8f544c4f6711272ab5662d0892980e63ccc3ba8ba9e3fbcc5e4d9
[INFO] Reloading complete
[INFO] 127.0.0.1:43165 - 33942 "HINFO IN 432949529890596107.8050361272252031817. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.021642899s
[INFO] 10.244.0.24:33042 - 27922 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000389816s
[INFO] 10.244.0.24:42171 - 38582 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000184446s
[INFO] 10.244.0.24:56803 - 17108 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000118517s
[INFO] 10.244.0.24:36839 - 60695 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000170938s
[INFO] 10.244.0.24:48923 - 36870 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128602s
[INFO] 10.244.0.24:43224 - 14793 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000199417s
[INFO] 10.244.0.24:40445 - 11974 "AAAA IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.00413958s
[INFO] 10.244.0.24:38595 - 36532 "A IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.004195152s
[INFO] 10.244.0.24:33576 - 36108 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003551961s
[INFO] 10.244.0.24:44447 - 31922 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004805135s
[INFO] 10.244.0.24:42741 - 32070 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003188282s
[INFO] 10.244.0.24:35696 - 46424 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00369519s
[INFO] 10.244.0.24:40570 - 13844 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 116 0.002311578s
[INFO] 10.244.0.24:45311 - 54943 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 140 0.002645389s
==> describe nodes <==
Name: ubuntu-20-agent
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ubuntu-20-agent
kubernetes.io/os=linux
minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
minikube.k8s.io/name=minikube
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_04_07T12_46_15_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=ubuntu-20-agent
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 07 Apr 2025 12:46:12 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ubuntu-20-agent
AcquireTime: <unset>
RenewTime: Mon, 07 Apr 2025 12:53:54 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 07 Apr 2025 12:51:50 +0000 Mon, 07 Apr 2025 12:46:12 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 07 Apr 2025 12:51:50 +0000 Mon, 07 Apr 2025 12:46:12 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 07 Apr 2025 12:51:50 +0000 Mon, 07 Apr 2025 12:46:12 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 07 Apr 2025 12:51:50 +0000 Mon, 07 Apr 2025 12:46:12 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.132.0.4
Hostname: ubuntu-20-agent
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859372Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859372Ki
pods: 110
System Info:
Machine ID: 591c9f1229383743e2bfc56a050d43d1
System UUID: 591c9f12-2938-3743-e2bf-c56a050d43d1
Boot ID: 32c262e1-f080-4c3c-9cad-9adf7e4991ef
Kernel Version: 5.15.0-1078-gcp
OS Image: Ubuntu 20.04.6 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://28.0.4
Kubelet Version: v1.32.2
Kube-Proxy Version: v1.32.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (24 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default cloud-spanner-emulator-cc9755fc7-8d2gd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m34s
gadget gadget-qfz76 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m34s
gcp-auth gcp-auth-cd9db85c-jmrjf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m6s
kube-system amd-gpu-device-plugin-86df5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m35s
kube-system coredns-668d6bf9bc-28dsp 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 7m35s
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m32s
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m32s
kube-system csi-hostpathplugin-n7jq8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m32s
kube-system etcd-ubuntu-20-agent 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 7m42s
kube-system kube-apiserver-ubuntu-20-agent 250m (3%) 0 (0%) 0 (0%) 0 (0%) 7m40s
kube-system kube-controller-manager-ubuntu-20-agent 200m (2%) 0 (0%) 0 (0%) 0 (0%) 7m40s
kube-system kube-proxy-4ktb9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m37s
kube-system kube-scheduler-ubuntu-20-agent 100m (1%) 0 (0%) 0 (0%) 0 (0%) 7m42s
kube-system metrics-server-7fbb699795-kfmft 100m (1%) 0 (0%) 200Mi (0%) 0 (0%) 7m34s
kube-system nvidia-device-plugin-daemonset-qtjqk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m35s
kube-system registry-6c88467877-kwnrb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m34s
kube-system registry-proxy-gpv45 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m34s
kube-system snapshot-controller-68b874b76f-7465t 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m33s
kube-system snapshot-controller-68b874b76f-bnf6p 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m33s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m34s
volcano-system volcano-admission-75d8f6b5c-pldpl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m33s
volcano-system volcano-controllers-86bdc5c9c-7srdg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m32s
volcano-system volcano-scheduler-75fdd99bcf-kkrdq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m32s
yakd-dashboard yakd-dashboard-575dd5996b-qf5qb 0 (0%) 0 (0%) 128Mi (0%) 256Mi (0%) 7m34s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (10%) 0 (0%)
memory 498Mi (1%) 426Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 7m33s kube-proxy
Normal Starting 7m47s kubelet Starting kubelet.
Warning CgroupV1 7m47s kubelet cgroup v1 support is in maintenance mode, please migrate to cgroup v2
Normal NodeAllocatableEnforced 7m46s kubelet Updated Node Allocatable limit across pods
Normal NodeHasNoDiskPressure 7m46s (x8 over 7m46s) kubelet Node ubuntu-20-agent status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 7m46s (x7 over 7m46s) kubelet Node ubuntu-20-agent status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 7m46s (x8 over 7m46s) kubelet Node ubuntu-20-agent status is now: NodeHasSufficientMemory
Normal Starting 7m41s kubelet Starting kubelet.
Warning CgroupV1 7m41s kubelet cgroup v1 support is in maintenance mode, please migrate to cgroup v2
Normal NodeAllocatableEnforced 7m40s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 7m40s kubelet Node ubuntu-20-agent status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 7m40s kubelet Node ubuntu-20-agent status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 7m40s kubelet Node ubuntu-20-agent status is now: NodeHasSufficientPID
Normal RegisteredNode 7m37s node-controller Node ubuntu-20-agent event: Registered Node ubuntu-20-agent in Controller
==> dmesg <==
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 83 90 10 44 0e 08 06
[ +9.877557] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 9f 53 98 65 e0 08 06
[ +0.046422] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 6c 53 68 81 1f 08 06
[ +0.061060] IPv4: martian source 10.244.0.1 from 10.244.0.11, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 0a 86 62 be 76 08 06
[ +3.198561] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 81 f4 b0 2d e3 08 06
[Apr 7 12:47] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 2e 41 17 ce 62 b6 08 06
[ +0.558988] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 48 74 4f d6 2f 08 06
[ +0.109195] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 22 a6 01 38 b3 2f 08 06
[ +23.480927] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 4e a2 ba 28 37 08 06
[ +5.548580] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 6e 70 68 84 64 08 06
[ +0.026445] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 8a 42 e0 9b 75 08 06
[ +19.909024] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 36 06 3b 6a b8 08 06
[ +0.000577] IPv4: martian source 10.244.0.24 from 10.244.0.3, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 07 5c 69 9a cd 08 06
==> etcd [e23f65eeb6af] <==
{"level":"info","ts":"2025-04-07T12:46:10.809584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 is starting a new election at term 1"}
{"level":"info","ts":"2025-04-07T12:46:10.809634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 became pre-candidate at term 1"}
{"level":"info","ts":"2025-04-07T12:46:10.809666Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 received MsgPreVoteResp from d3d995060bc0a086 at term 1"}
{"level":"info","ts":"2025-04-07T12:46:10.809682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 became candidate at term 2"}
{"level":"info","ts":"2025-04-07T12:46:10.809692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 received MsgVoteResp from d3d995060bc0a086 at term 2"}
{"level":"info","ts":"2025-04-07T12:46:10.809700Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 became leader at term 2"}
{"level":"info","ts":"2025-04-07T12:46:10.809709Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d3d995060bc0a086 elected leader d3d995060bc0a086 at term 2"}
{"level":"info","ts":"2025-04-07T12:46:10.810586Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"d3d995060bc0a086","local-member-attributes":"{Name:ubuntu-20-agent ClientURLs:[https://10.132.0.4:2379]}","request-path":"/0/members/d3d995060bc0a086/attributes","cluster-id":"36fd114adae62b7a","publish-timeout":"7s"}
{"level":"info","ts":"2025-04-07T12:46:10.810757Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-04-07T12:46:10.810736Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2025-04-07T12:46:10.810857Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2025-04-07T12:46:10.810931Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2025-04-07T12:46:10.810645Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-04-07T12:46:10.811710Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-04-07T12:46:10.811768Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-04-07T12:46:10.811988Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"36fd114adae62b7a","local-member-id":"d3d995060bc0a086","cluster-version":"3.5"}
{"level":"info","ts":"2025-04-07T12:46:10.812087Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2025-04-07T12:46:10.812120Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2025-04-07T12:46:10.812616Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.132.0.4:2379"}
{"level":"info","ts":"2025-04-07T12:46:10.812671Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2025-04-07T12:46:27.716881Z","caller":"traceutil/trace.go:171","msg":"trace[1770517557] linearizableReadLoop","detail":"{readStateIndex:875; appliedIndex:873; }","duration":"121.221478ms","start":"2025-04-07T12:46:27.595638Z","end":"2025-04-07T12:46:27.716859Z","steps":["trace[1770517557] 'read index received' (duration: 58.788992ms)","trace[1770517557] 'applied index is now lower than readState.Index' (duration: 62.431839ms)"],"step_count":2}
{"level":"info","ts":"2025-04-07T12:46:27.717075Z","caller":"traceutil/trace.go:171","msg":"trace[1614856425] transaction","detail":"{read_only:false; response_revision:855; number_of_response:1; }","duration":"123.047449ms","start":"2025-04-07T12:46:27.594011Z","end":"2025-04-07T12:46:27.717058Z","steps":["trace[1614856425] 'process raft request' (duration: 60.314585ms)","trace[1614856425] 'compare' (duration: 62.231306ms)"],"step_count":2}
{"level":"warn","ts":"2025-04-07T12:46:27.717164Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.503421ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gcp-auth\" limit:1 ","response":"range_response_count:1 size:716"}
{"level":"info","ts":"2025-04-07T12:46:27.717216Z","caller":"traceutil/trace.go:171","msg":"trace[1260179640] range","detail":"{range_begin:/registry/namespaces/gcp-auth; range_end:; response_count:1; response_revision:856; }","duration":"121.596627ms","start":"2025-04-07T12:46:27.595610Z","end":"2025-04-07T12:46:27.717207Z","steps":["trace[1260179640] 'agreement among raft nodes before linearized reading' (duration: 121.409977ms)"],"step_count":1}
{"level":"info","ts":"2025-04-07T12:46:27.717377Z","caller":"traceutil/trace.go:171","msg":"trace[2146819799] transaction","detail":"{read_only:false; response_revision:856; number_of_response:1; }","duration":"123.358599ms","start":"2025-04-07T12:46:27.594010Z","end":"2025-04-07T12:46:27.717368Z","steps":["trace[2146819799] 'process raft request' (duration: 122.792111ms)"],"step_count":1}
==> gcp-auth [43612f6a057c] <==
2025/04/07 12:47:53 GCP Auth Webhook started!
==> kernel <==
12:53:55 up 4:36, 0 users, load average: 0.43, 0.81, 1.47
Linux ubuntu-20-agent 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.6 LTS"
==> kube-apiserver [1b21328ae243] <==
E0407 12:46:45.376383 1 controller.go:146] "Unhandled Error" err=<
Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
E0407 12:46:45.376403 1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.94.227:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.94.227:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.94.227:443: connect: connection refused" logger="UnhandledError"
E0407 12:46:45.377915 1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.94.227:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.94.227:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.94.227:443: connect: connection refused" logger="UnhandledError"
I0407 12:46:45.410925 1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
W0407 12:46:48.502032 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.49.30:443: connect: connection refused
E0407 12:46:48.502074 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.49.30:443: connect: connection refused" logger="UnhandledError"
W0407 12:46:48.504222 1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.232.48:443: connect: connection refused
W0407 12:46:58.973672 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.49.30:443: connect: connection refused
E0407 12:46:58.973729 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.49.30:443: connect: connection refused" logger="UnhandledError"
W0407 12:46:58.975524 1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.232.48:443: connect: connection refused
W0407 12:46:58.987339 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.49.30:443: connect: connection refused
E0407 12:46:58.987397 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.49.30:443: connect: connection refused" logger="UnhandledError"
W0407 12:46:58.989094 1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.232.48:443: connect: connection refused
W0407 12:47:08.989853 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.49.30:443: connect: connection refused
E0407 12:47:08.989904 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.49.30:443: connect: connection refused" logger="UnhandledError"
W0407 12:47:08.992543 1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.232.48:443: connect: connection refused
W0407 12:47:30.984716 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.49.30:443: connect: connection refused
E0407 12:47:30.984768 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.49.30:443: connect: connection refused" logger="UnhandledError"
W0407 12:47:30.996198 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.49.30:443: connect: connection refused
E0407 12:47:30.996239 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.49.30:443: connect: connection refused" logger="UnhandledError"
W0407 12:47:49.957919 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.49.30:443: connect: connection refused
E0407 12:47:49.957967 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.49.30:443: connect: connection refused" logger="UnhandledError"
==> kube-controller-manager [953db0d2f82d] <==
I0407 12:47:49.988564 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-cd9db85c" duration="14.740381ms"
I0407 12:47:49.988694 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-cd9db85c" duration="80.775µs"
I0407 12:47:49.996117 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-cd9db85c" duration="66.867µs"
I0407 12:47:51.978096 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-86bdc5c9c" duration="75.82µs"
I0407 12:47:53.562541 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-cd9db85c" duration="6.899009ms"
I0407 12:47:53.562657 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-cd9db85c" duration="64.49µs"
I0407 12:47:58.981977 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-75fdd99bcf" duration="71.07µs"
I0407 12:48:02.979460 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-86bdc5c9c" duration="85.022µs"
I0407 12:48:05.044424 1 job_controller.go:598] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
I0407 12:48:06.029164 1 job_controller.go:598] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
I0407 12:48:11.980313 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-75fdd99bcf" duration="83.377µs"
I0407 12:48:17.455623 1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent"
I0407 12:48:30.980787 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-86bdc5c9c" duration="67.784µs"
I0407 12:48:36.979786 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-75fdd99bcf" duration="78.169µs"
I0407 12:48:42.980616 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-86bdc5c9c" duration="71.78µs"
I0407 12:48:50.981161 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-75fdd99bcf" duration="84.638µs"
I0407 12:49:53.978345 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-86bdc5c9c" duration="77.788µs"
I0407 12:50:05.979261 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-86bdc5c9c" duration="70.424µs"
I0407 12:50:06.980686 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-75fdd99bcf" duration="91.043µs"
I0407 12:50:19.979991 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-75fdd99bcf" duration="71.314µs"
I0407 12:51:50.892283 1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent"
I0407 12:52:44.980227 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-86bdc5c9c" duration="73.593µs"
I0407 12:52:55.978449 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-75fdd99bcf" duration="73.473µs"
I0407 12:52:56.980203 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-86bdc5c9c" duration="78.283µs"
I0407 12:53:08.980954 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-75fdd99bcf" duration="267.32µs"
==> kube-proxy [e6de974948a2] <==
I0407 12:46:21.832215 1 server_linux.go:66] "Using iptables proxy"
I0407 12:46:22.002444 1 server.go:698] "Successfully retrieved node IP(s)" IPs=["10.132.0.4"]
E0407 12:46:22.002521 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0407 12:46:22.084578 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0407 12:46:22.084642 1 server_linux.go:170] "Using iptables Proxier"
I0407 12:46:22.090930 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0407 12:46:22.091456 1 server.go:497] "Version info" version="v1.32.2"
I0407 12:46:22.091487 1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0407 12:46:22.104770 1 config.go:105] "Starting endpoint slice config controller"
I0407 12:46:22.104822 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0407 12:46:22.104856 1 config.go:199] "Starting service config controller"
I0407 12:46:22.104861 1 shared_informer.go:313] Waiting for caches to sync for service config
I0407 12:46:22.105247 1 config.go:329] "Starting node config controller"
I0407 12:46:22.105262 1 shared_informer.go:313] Waiting for caches to sync for node config
I0407 12:46:22.207396 1 shared_informer.go:320] Caches are synced for service config
I0407 12:46:22.207478 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0407 12:46:22.211383 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [8e962b9f0917] <==
W0407 12:46:12.396702 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0407 12:46:12.396720 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0407 12:46:13.243091 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0407 12:46:13.243142 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0407 12:46:13.305117 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0407 12:46:13.305161 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0407 12:46:13.312894 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0407 12:46:13.312941 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0407 12:46:13.314239 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0407 12:46:13.314279 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0407 12:46:13.357817 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0407 12:46:13.357865 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0407 12:46:13.450908 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0407 12:46:13.450956 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0407 12:46:13.517730 1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0407 12:46:13.517783 1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0407 12:46:13.524289 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0407 12:46:13.524338 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0407 12:46:13.554955 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0407 12:46:13.554999 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0407 12:46:13.556960 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0407 12:46:13.556999 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0407 12:46:13.658851 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0407 12:46:13.658899 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
I0407 12:46:15.991211 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
-- Logs begin at Fri 2025-02-07 00:17:37 UTC, end at Mon 2025-04-07 12:53:55 UTC. --
Apr 07 12:51:52 ubuntu-20-agent kubelet[1430849]: E0407 12:51:52.969124 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-86bdc5c9c-7srdg" podUID="cd2b3c58-47c5-46f8-ba36-579e70ff12c3"
Apr 07 12:51:59 ubuntu-20-agent kubelet[1430849]: E0407 12:51:59.969293 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-75fdd99bcf-kkrdq" podUID="eca17150-2673-4431-a0cc-079a7c574525"
Apr 07 12:52:05 ubuntu-20-agent kubelet[1430849]: E0407 12:52:05.968601 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-86bdc5c9c-7srdg" podUID="cd2b3c58-47c5-46f8-ba36-579e70ff12c3"
Apr 07 12:52:14 ubuntu-20-agent kubelet[1430849]: E0407 12:52:14.969591 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-75fdd99bcf-kkrdq" podUID="eca17150-2673-4431-a0cc-079a7c574525"
Apr 07 12:52:16 ubuntu-20-agent kubelet[1430849]: E0407 12:52:16.969385 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-86bdc5c9c-7srdg" podUID="cd2b3c58-47c5-46f8-ba36-579e70ff12c3"
Apr 07 12:52:26 ubuntu-20-agent kubelet[1430849]: E0407 12:52:26.969063 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-75fdd99bcf-kkrdq" podUID="eca17150-2673-4431-a0cc-079a7c574525"
Apr 07 12:52:33 ubuntu-20-agent kubelet[1430849]: E0407 12:52:33.047650 1430849 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3"
Apr 07 12:52:33 ubuntu-20-agent kubelet[1430849]: E0407 12:52:33.047726 1430849 kuberuntime_image.go:55] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3"
Apr 07 12:52:33 ubuntu-20-agent kubelet[1430849]: E0407 12:52:33.047876 1430849 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:volcano-controllers,Image:docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3,Command:[],Args:[--logtostderr --enable-healthz=true --enable-metrics=true --leader-elect=false --kube-api-qps=50 --kube-api-burst=100 --worker-threads=3 --worker-threads-for-gc=5 --worker-threads-for-podgroup=5 -v=4 2>&1],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mp8x5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod volcano-controllers-86bdc5c9c-7srdg_volcano-system(cd2b3c58-47c5-46f8-ba36-579e70ff12c3): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Apr 07 12:52:33 ubuntu-20-agent kubelet[1430849]: E0407 12:52:33.049056 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-86bdc5c9c-7srdg" podUID="cd2b3c58-47c5-46f8-ba36-579e70ff12c3"
Apr 07 12:52:43 ubuntu-20-agent kubelet[1430849]: E0407 12:52:43.043523 1430849 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86"
Apr 07 12:52:43 ubuntu-20-agent kubelet[1430849]: E0407 12:52:43.043586 1430849 kuberuntime_image.go:55] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86"
Apr 07 12:52:43 ubuntu-20-agent kubelet[1430849]: E0407 12:52:43.043701 1430849 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:volcano-scheduler,Image:docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86,Command:[],Args:[--logtostderr --scheduler-conf=/volcano.scheduler/volcano-scheduler.conf --enable-healthz=true --enable-metrics=true --leader-elect=false --kube-api-qps=2000 --kube-api-burst=2000 --schedule-period=1s --node-worker-threads=20 -v=3 2>&1],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DEBUG_SOCKET_DIR,Value:/tmp/klog-socks,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scheduler-config,ReadOnly:false,MountPath:/volcano.scheduler,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:klog-sock,ReadOnly:false,MountPath:/tmp/klog-socks,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbqtk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod volcano-scheduler-75fdd99bcf-kkrdq_volcano-system(eca17150-2673-4431-a0cc-079a7c574525): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Apr 07 12:52:43 ubuntu-20-agent kubelet[1430849]: E0407 12:52:43.044955 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-75fdd99bcf-kkrdq" podUID="eca17150-2673-4431-a0cc-079a7c574525"
Apr 07 12:52:44 ubuntu-20-agent kubelet[1430849]: E0407 12:52:44.969445 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-86bdc5c9c-7srdg" podUID="cd2b3c58-47c5-46f8-ba36-579e70ff12c3"
Apr 07 12:52:55 ubuntu-20-agent kubelet[1430849]: E0407 12:52:55.968467 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-75fdd99bcf-kkrdq" podUID="eca17150-2673-4431-a0cc-079a7c574525"
Apr 07 12:52:56 ubuntu-20-agent kubelet[1430849]: E0407 12:52:56.968305 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-86bdc5c9c-7srdg" podUID="cd2b3c58-47c5-46f8-ba36-579e70ff12c3"
Apr 07 12:53:08 ubuntu-20-agent kubelet[1430849]: E0407 12:53:08.969002 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-75fdd99bcf-kkrdq" podUID="eca17150-2673-4431-a0cc-079a7c574525"
Apr 07 12:53:11 ubuntu-20-agent kubelet[1430849]: E0407 12:53:11.968602 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-86bdc5c9c-7srdg" podUID="cd2b3c58-47c5-46f8-ba36-579e70ff12c3"
Apr 07 12:53:19 ubuntu-20-agent kubelet[1430849]: E0407 12:53:19.968727 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-75fdd99bcf-kkrdq" podUID="eca17150-2673-4431-a0cc-079a7c574525"
Apr 07 12:53:24 ubuntu-20-agent kubelet[1430849]: E0407 12:53:24.970354 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-86bdc5c9c-7srdg" podUID="cd2b3c58-47c5-46f8-ba36-579e70ff12c3"
Apr 07 12:53:31 ubuntu-20-agent kubelet[1430849]: E0407 12:53:31.969238 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-75fdd99bcf-kkrdq" podUID="eca17150-2673-4431-a0cc-079a7c574525"
Apr 07 12:53:38 ubuntu-20-agent kubelet[1430849]: E0407 12:53:38.978696 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-86bdc5c9c-7srdg" podUID="cd2b3c58-47c5-46f8-ba36-579e70ff12c3"
Apr 07 12:53:44 ubuntu-20-agent kubelet[1430849]: E0407 12:53:44.969172 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-75fdd99bcf-kkrdq" podUID="eca17150-2673-4431-a0cc-079a7c574525"
Apr 07 12:53:51 ubuntu-20-agent kubelet[1430849]: E0407 12:53:51.968448 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-86bdc5c9c-7srdg" podUID="cd2b3c58-47c5-46f8-ba36-579e70ff12c3"
==> storage-provisioner [9367d6480bcd] <==
I0407 12:46:22.691767 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0407 12:46:22.700993 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0407 12:46:22.701760 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0407 12:46:22.709645 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0407 12:46:22.709901 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent_d2ced8c7-5bce-4be8-ab28-23171422388c!
I0407 12:46:22.710495 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"088063e6-27ee-4b45-98d2-8cc5af467fa3", APIVersion:"v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent_d2ced8c7-5bce-4be8-ab28-23171422388c became leader
I0407 12:46:22.810972 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent_d2ced8c7-5bce-4be8-ab28-23171422388c!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: volcano-admission-init-4bqwh volcano-controllers-86bdc5c9c-7srdg volcano-scheduler-75fdd99bcf-kkrdq
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context minikube describe pod volcano-admission-init-4bqwh volcano-controllers-86bdc5c9c-7srdg volcano-scheduler-75fdd99bcf-kkrdq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context minikube describe pod volcano-admission-init-4bqwh volcano-controllers-86bdc5c9c-7srdg volcano-scheduler-75fdd99bcf-kkrdq: exit status 1 (63.576092ms)
** stderr **
Error from server (NotFound): pods "volcano-admission-init-4bqwh" not found
Error from server (NotFound): pods "volcano-controllers-86bdc5c9c-7srdg" not found
Error from server (NotFound): pods "volcano-scheduler-75fdd99bcf-kkrdq" not found
** /stderr **
helpers_test.go:279: kubectl --context minikube describe pod volcano-admission-init-4bqwh volcano-controllers-86bdc5c9c-7srdg volcano-scheduler-75fdd99bcf-kkrdq: exit status 1
addons_test.go:992: (dbg) Run: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.935602828s)
--- FAIL: TestAddons/serial/Volcano (372.74s)