=== RUN TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 10.636758ms
addons_test.go:807: volcano-scheduler stabilized in 10.654352ms
addons_test.go:815: volcano-admission stabilized in 10.8629ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-q7mcw" [33f5e98f-fb04-4f70-b72c-d223e4812765] Pending / Ready:ContainersNotReady (containers with unready status: [volcano-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [volcano-scheduler])
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "volcano-system" "app=volcano-scheduler" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:829: ***** TestAddons/serial/Volcano: pod "app=volcano-scheduler" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:829: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
addons_test.go:829: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-12-05 18:54:12.469023968 +0000 UTC m=+507.085004487
addons_test.go:829: (dbg) Run: kubectl --context minikube describe po volcano-scheduler-6c9778cbdf-q7mcw -n volcano-system
addons_test.go:829: (dbg) kubectl --context minikube describe po volcano-scheduler-6c9778cbdf-q7mcw -n volcano-system:
Name:                 volcano-scheduler-6c9778cbdf-q7mcw
Namespace:            volcano-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      volcano-scheduler
Node:                 ubuntu-20-agent-15/10.128.15.240
Start Time:           Thu, 05 Dec 2024 18:46:52 +0000
Labels:               app=volcano-scheduler
                      pod-template-hash=6c9778cbdf
Annotations:          <none>
Status:               Pending
IP:                   10.244.0.17
IPs:
  IP:  10.244.0.17
Controlled By:  ReplicaSet/volcano-scheduler-6c9778cbdf
Containers:
  volcano-scheduler:
    Container ID:
    Image:          docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882
    Image ID:
    Port:           <none>
    Host Port:      <none>
    Args:
      --logtostderr
      --scheduler-conf=/volcano.scheduler/volcano-scheduler.conf
      --enable-healthz=true
      --enable-metrics=true
      --leader-elect=false
      -v=3
      2>&1
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      DEBUG_SOCKET_DIR:  /tmp/klog-socks
    Mounts:
      /tmp/klog-socks from klog-sock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4bz59 (ro)
      /volcano.scheduler from scheduler-config (rw)
Conditions:
  Type                       Status
  PodReadyToStartContainers  True
  Initialized                True
  Ready                      False
  ContainersReady            False
  PodScheduled               True
Volumes:
  scheduler-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      volcano-scheduler-configmap
    Optional:  false
  klog-sock:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp/klog-socks
    HostPathType:
  kube-api-access-4bz59:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  7m20s                   default-scheduler  Successfully assigned volcano-system/volcano-scheduler-6c9778cbdf-q7mcw to ubuntu-20-agent-15
  Warning  Failed     6m41s                   kubelet            Failed to pull image "docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882": toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    5m35s (x4 over 7m20s)   kubelet            Pulling image "docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
  Warning  Failed     5m34s (x3 over 6m55s)   kubelet            Failed to pull image "docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882": Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     5m34s (x4 over 6m55s)   kubelet            Error: ErrImagePull
  Warning  Failed     5m8s (x6 over 6m54s)    kubelet            Error: ImagePullBackOff
  Normal   BackOff    2m17s (x18 over 6m54s)  kubelet            Back-off pulling image "docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
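The events above pin down the failure: every pull of the vc-scheduler image hit Docker Hub's anonymous pull rate limit (toomanyrequests), so the container never started and the 6m0s wait expired. Since the none driver pulls through the host's Docker daemon, one possible mitigation for this kind of CI host, sketched here on the assumption that valid Docker Hub credentials are available, is to authenticate the daemon and pre-pull the exact digest before the test runs:

  # assumption: valid Docker Hub credentials exist on the CI host
  docker login docker.io
  # pre-pull by digest so the kubelet finds the image locally instead of pulling
  docker pull docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882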
addons_test.go:829: (dbg) Run: kubectl --context minikube logs volcano-scheduler-6c9778cbdf-q7mcw -n volcano-system
addons_test.go:829: (dbg) Non-zero exit: kubectl --context minikube logs volcano-scheduler-6c9778cbdf-q7mcw -n volcano-system: exit status 1 (79.80895ms)
** stderr **
Error from server (BadRequest): container "volcano-scheduler" in pod "volcano-scheduler-6c9778cbdf-q7mcw" is waiting to start: trying and failing to pull image
** /stderr **
addons_test.go:829: kubectl --context minikube logs volcano-scheduler-6c9778cbdf-q7mcw -n volcano-system: exit status 1
addons_test.go:830: failed waiting for app=volcano-scheduler pod: app=volcano-scheduler within 6m0s: context deadline exceeded
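Because the container never started, kubectl logs has nothing to return (the BadRequest error above); for a pod stuck in ImagePullBackOff the pod events are the useful signal. A minimal diagnostic sketch, assuming the same minikube context:

  # list recent events in the namespace, newest last
  kubectl --context minikube get events -n volcano-system --sort-by=.lastTimestamp
  # read the waiting reason straight off the pod status
  kubectl --context minikube get pod -l app=volcano-scheduler -n volcano-system -o jsonpath='{.items[0].status.containerStatuses[0].state.waiting.reason}'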
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p minikube logs -n 25: (1.137495669s)
helpers_test.go:252: TestAddons/serial/Volcano logs:
-- stdout --
==> Audit <==
|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC | 05 Dec 24 18:45 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC | 05 Dec 24 18:45 UTC |
| start | -o=json --download-only | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.2 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC | 05 Dec 24 18:45 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC | 05 Dec 24 18:45 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC | 05 Dec 24 18:45 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC | 05 Dec 24 18:45 UTC |
| start | --download-only -p | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC | |
| | minikube --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:36049 | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC | 05 Dec 24 18:45 UTC |
| start | -p minikube --alsologtostderr | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC | 05 Dec 24 18:46 UTC |
| | -v=1 --memory=2048 | | | | | |
| | --wait=true --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 05 Dec 24 18:46 UTC | 05 Dec 24 18:46 UTC |
| addons | enable dashboard -p minikube | minikube | jenkins | v1.34.0 | 05 Dec 24 18:46 UTC | |
| addons | disable dashboard -p minikube | minikube | jenkins | v1.34.0 | 05 Dec 24 18:46 UTC | |
| start | -p minikube --wait=true | minikube | jenkins | v1.34.0 | 05 Dec 24 18:46 UTC | 05 Dec 24 18:48 UTC |
| | --memory=4000 | | | | | |
| | --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --addons=amd-gpu-device-plugin | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/12/05 18:46:30
Running on machine: ubuntu-20-agent-15
Binary: Built with gc go1.23.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1205 18:46:30.769916 392706 out.go:345] Setting OutFile to fd 1 ...
I1205 18:46:30.770042 392706 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 18:46:30.770054 392706 out.go:358] Setting ErrFile to fd 2...
I1205 18:46:30.770059 392706 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 18:46:30.770279 392706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-381606/.minikube/bin
I1205 18:46:30.771080 392706 out.go:352] Setting JSON to false
I1205 18:46:30.772086 392706 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5340,"bootTime":1733419051,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1205 18:46:30.772272 392706 start.go:139] virtualization: kvm guest
I1205 18:46:30.774739 392706 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
I1205 18:46:30.776411 392706 out.go:177] - MINIKUBE_LOCATION=20052
I1205 18:46:30.776473 392706 notify.go:220] Checking for updates...
W1205 18:46:30.776400 392706 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20052-381606/.minikube/cache/preloaded-tarball: no such file or directory
I1205 18:46:30.779362 392706 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1205 18:46:30.780804 392706 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20052-381606/kubeconfig
I1205 18:46:30.782296 392706 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-381606/.minikube
I1205 18:46:30.783681 392706 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1205 18:46:30.784965 392706 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1205 18:46:30.786416 392706 driver.go:394] Setting default libvirt URI to qemu:///system
I1205 18:46:30.797207 392706 out.go:177] * Using the none driver based on user configuration
I1205 18:46:30.798748 392706 start.go:297] selected driver: none
I1205 18:46:30.798772 392706 start.go:901] validating driver "none" against <nil>
I1205 18:46:30.798787 392706 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1205 18:46:30.798827 392706 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
W1205 18:46:30.799140 392706 out.go:270] ! The 'none' driver does not respect the --memory flag
I1205 18:46:30.799692 392706 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I1205 18:46:30.799965 392706 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1205 18:46:30.799998 392706 cni.go:84] Creating CNI manager for ""
I1205 18:46:30.800154 392706 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1205 18:46:30.800167 392706 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1205 18:46:30.800242 392706 start.go:340] cluster config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1205 18:46:30.801784 392706 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
I1205 18:46:30.803412 392706 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/config.json ...
I1205 18:46:30.803448 392706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/config.json: {Name:mk77089bbcdd696d611f941aa97c12acab7ba119 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 18:46:30.803597 392706 start.go:360] acquireMachinesLock for minikube: {Name:mk65d6052f343498845971aaee546d269ff2c3cc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1205 18:46:30.803637 392706 start.go:364] duration metric: took 22.313µs to acquireMachinesLock for "minikube"
I1205 18:46:30.803658 392706 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I1205 18:46:30.803729 392706 start.go:125] createHost starting for "" (driver="none")
I1205 18:46:30.805465 392706 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
I1205 18:46:30.806776 392706 exec_runner.go:51] Run: systemctl --version
I1205 18:46:30.809409 392706 start.go:159] libmachine.API.Create for "minikube" (driver="none")
I1205 18:46:30.809439 392706 client.go:168] LocalClient.Create starting
I1205 18:46:30.809520 392706 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-381606/.minikube/certs/ca.pem
I1205 18:46:30.809558 392706 main.go:141] libmachine: Decoding PEM data...
I1205 18:46:30.809584 392706 main.go:141] libmachine: Parsing certificate...
I1205 18:46:30.809640 392706 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-381606/.minikube/certs/cert.pem
I1205 18:46:30.809665 392706 main.go:141] libmachine: Decoding PEM data...
I1205 18:46:30.809691 392706 main.go:141] libmachine: Parsing certificate...
I1205 18:46:30.810115 392706 client.go:171] duration metric: took 667.599µs to LocalClient.Create
I1205 18:46:30.810143 392706 start.go:167] duration metric: took 736.433µs to libmachine.API.Create "minikube"
I1205 18:46:30.810155 392706 start.go:293] postStartSetup for "minikube" (driver="none")
I1205 18:46:30.810203 392706 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1205 18:46:30.810256 392706 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1205 18:46:30.820595 392706 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1205 18:46:30.820618 392706 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1205 18:46:30.820626 392706 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1205 18:46:30.822589 392706 out.go:177] * OS release is Ubuntu 20.04.6 LTS
I1205 18:46:30.823891 392706 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-381606/.minikube/addons for local assets ...
I1205 18:46:30.823936 392706 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-381606/.minikube/files for local assets ...
I1205 18:46:30.823957 392706 start.go:296] duration metric: took 13.796067ms for postStartSetup
I1205 18:46:30.824582 392706 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/config.json ...
I1205 18:46:30.824717 392706 start.go:128] duration metric: took 20.970392ms to createHost
I1205 18:46:30.824732 392706 start.go:83] releasing machines lock for "minikube", held for 21.082813ms
I1205 18:46:30.825166 392706 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1205 18:46:30.825248 392706 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
W1205 18:46:30.827158 392706 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1205 18:46:30.827217 392706 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1205 18:46:30.838011 392706 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I1205 18:46:30.838043 392706 start.go:495] detecting cgroup driver to use...
I1205 18:46:30.838088 392706 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1205 18:46:30.838224 392706 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1205 18:46:30.859469 392706 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I1205 18:46:30.870837 392706 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1205 18:46:30.881521 392706 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1205 18:46:30.881608 392706 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1205 18:46:30.891718 392706 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1205 18:46:30.904467 392706 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1205 18:46:30.915222 392706 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1205 18:46:30.925150 392706 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1205 18:46:30.934003 392706 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1205 18:46:30.943119 392706 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1205 18:46:30.953272 392706 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1205 18:46:30.964278 392706 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1205 18:46:30.973483 392706 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1205 18:46:30.981164 392706 exec_runner.go:51] Run: sudo systemctl daemon-reload
I1205 18:46:31.202016 392706 exec_runner.go:51] Run: sudo systemctl restart containerd
I1205 18:46:31.270033 392706 start.go:495] detecting cgroup driver to use...
I1205 18:46:31.270084 392706 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1205 18:46:31.270202 392706 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1205 18:46:31.292954 392706 exec_runner.go:51] Run: which cri-dockerd
I1205 18:46:31.294048 392706 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1205 18:46:31.303897 392706 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
I1205 18:46:31.303933 392706 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
I1205 18:46:31.303982 392706 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
I1205 18:46:31.312945 392706 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I1205 18:46:31.313105 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3548394414 /etc/systemd/system/cri-docker.service.d/10-cni.conf
I1205 18:46:31.321553 392706 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I1205 18:46:31.539653 392706 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I1205 18:46:31.773626 392706 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I1205 18:46:31.773803 392706 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
I1205 18:46:31.773820 392706 exec_runner.go:203] rm: /etc/docker/daemon.json
I1205 18:46:31.773861 392706 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
I1205 18:46:31.782777 392706 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
I1205 18:46:31.782930 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1139489101 /etc/docker/daemon.json
I1205 18:46:31.792259 392706 exec_runner.go:51] Run: sudo systemctl daemon-reload
I1205 18:46:32.033938 392706 exec_runner.go:51] Run: sudo systemctl restart docker
I1205 18:46:32.372403 392706 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
I1205 18:46:32.385156 392706 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
I1205 18:46:32.404724 392706 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I1205 18:46:32.419377 392706 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
I1205 18:46:32.653485 392706 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I1205 18:46:32.890583 392706 exec_runner.go:51] Run: sudo systemctl daemon-reload
I1205 18:46:33.122075 392706 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
I1205 18:46:33.137270 392706 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I1205 18:46:33.150523 392706 exec_runner.go:51] Run: sudo systemctl daemon-reload
I1205 18:46:33.387376 392706 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
I1205 18:46:33.463329 392706 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1205 18:46:33.463431 392706 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
I1205 18:46:33.465011 392706 start.go:563] Will wait 60s for crictl version
I1205 18:46:33.465055 392706 exec_runner.go:51] Run: which crictl
I1205 18:46:33.466002 392706 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
I1205 18:46:33.500298 392706 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.3.1
RuntimeApiVersion: v1
I1205 18:46:33.500406 392706 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I1205 18:46:33.523254 392706 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I1205 18:46:33.549111 392706 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
I1205 18:46:33.549216 392706 exec_runner.go:51] Run: grep 127.0.0.1 host.minikube.internal$ /etc/hosts
I1205 18:46:33.552598 392706 out.go:177] - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
I1205 18:46:33.553992 392706 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.240 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1205 18:46:33.554162 392706 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1205 18:46:33.554173 392706 kubeadm.go:934] updating node { 10.128.15.240 8443 v1.31.2 docker true true} ...
I1205 18:46:33.554279 392706 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-15 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.128.15.240 --resolv-conf=/run/systemd/resolve/resolv.conf
[Install]
config:
{KubernetesVersion:v1.31.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
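Because the none driver writes this unit onto the host itself, the merged result of /lib/systemd/system/kubelet.service and the 10-kubeadm.conf drop-in can be inspected directly on the node; a sketch, assuming systemd is available:

  # show the unit file together with all drop-ins
  systemctl cat kubelet
  # print the effective ExecStart after drop-in overrides
  systemctl show kubelet -p ExecStart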
I1205 18:46:33.554340 392706 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
I1205 18:46:33.605060 392706 cni.go:84] Creating CNI manager for ""
I1205 18:46:33.605089 392706 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1205 18:46:33.605106 392706 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I1205 18:46:33.605131 392706 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.128.15.240 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-15 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.128.15.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.128.15.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1205 18:46:33.605274 392706 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.128.15.240
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "ubuntu-20-agent-15"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "10.128.15.240"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "10.128.15.240"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      - name: "proxy-refresh-interval"
        value: "70000"
kubernetesVersion: v1.31.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
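This generated config can be sanity-checked without mutating the host, since kubeadm init supports a dry-run mode; a sketch that mirrors the invocation minikube itself issues later in this log, with --dry-run added:

  sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run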
I1205 18:46:33.605344 392706 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
I1205 18:46:33.614930 392706 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: exit status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
Initiating transfer...
I1205 18:46:33.614987 392706 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
I1205 18:46:33.625021 392706 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
I1205 18:46:33.625038 392706 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
I1205 18:46:33.625077 392706 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I1205 18:46:33.625084 392706 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
I1205 18:46:33.625119 392706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-381606/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
I1205 18:46:33.625132 392706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-381606/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
I1205 18:46:33.637463 392706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-381606/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
I1205 18:46:33.676053 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2709502434 /var/lib/minikube/binaries/v1.31.2/kubectl
I1205 18:46:33.681993 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3191562976 /var/lib/minikube/binaries/v1.31.2/kubeadm
I1205 18:46:33.706012 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3852283944 /var/lib/minikube/binaries/v1.31.2/kubelet
I1205 18:46:33.774226 392706 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1205 18:46:33.783978 392706 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
I1205 18:46:33.784031 392706 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I1205 18:46:33.784072 392706 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I1205 18:46:33.794212 392706 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
I1205 18:46:33.794832 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3615464511 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I1205 18:46:33.806248 392706 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
I1205 18:46:33.806272 392706 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
I1205 18:46:33.806315 392706 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
I1205 18:46:33.814811 392706 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1205 18:46:33.815017 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1960753144 /lib/systemd/system/kubelet.service
I1205 18:46:33.824391 392706 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2299 bytes)
I1205 18:46:33.824567 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2327795690 /var/tmp/minikube/kubeadm.yaml.new
I1205 18:46:33.834289 392706 exec_runner.go:51] Run: grep 10.128.15.240 control-plane.minikube.internal$ /etc/hosts
I1205 18:46:33.835736 392706 exec_runner.go:51] Run: sudo systemctl daemon-reload
I1205 18:46:34.053075 392706 exec_runner.go:51] Run: sudo systemctl start kubelet
I1205 18:46:34.068755 392706 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube for IP: 10.128.15.240
I1205 18:46:34.068778 392706 certs.go:194] generating shared ca certs ...
I1205 18:46:34.068803 392706 certs.go:226] acquiring lock for ca certs: {Name:mk9c2572d767bddb7155b721ed33333cb21d53bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 18:46:34.068988 392706 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-381606/.minikube/ca.key
I1205 18:46:34.069041 392706 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-381606/.minikube/proxy-client-ca.key
I1205 18:46:34.069052 392706 certs.go:256] generating profile certs ...
I1205 18:46:34.069124 392706 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/client.key
I1205 18:46:34.069143 392706 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/client.crt with IP's: []
I1205 18:46:34.341279 392706 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/client.crt ...
I1205 18:46:34.341316 392706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/client.crt: {Name:mk08c5e544f65da5094f7bd202bf374884568ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 18:46:34.341476 392706 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/client.key ...
I1205 18:46:34.341489 392706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/client.key: {Name:mkcb634696cd1738afddbc3bec63dcd527f9beaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 18:46:34.341554 392706 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/apiserver.key.271ff23d
I1205 18:46:34.341568 392706 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/apiserver.crt.271ff23d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.128.15.240]
I1205 18:46:34.424022 392706 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/apiserver.crt.271ff23d ...
I1205 18:46:34.424058 392706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/apiserver.crt.271ff23d: {Name:mkbc27c587d344d6ba9d2761951e0622a5123980 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 18:46:34.424201 392706 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/apiserver.key.271ff23d ...
I1205 18:46:34.424213 392706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/apiserver.key.271ff23d: {Name:mkfadbb838d7f9bbe16a3192eff07dfb0b6fc080 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 18:46:34.424269 392706 certs.go:381] copying /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/apiserver.crt.271ff23d -> /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/apiserver.crt
I1205 18:46:34.424365 392706 certs.go:385] copying /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/apiserver.key.271ff23d -> /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/apiserver.key
I1205 18:46:34.424430 392706 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/proxy-client.key
I1205 18:46:34.424445 392706 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I1205 18:46:34.554472 392706 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/proxy-client.crt ...
I1205 18:46:34.554508 392706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/proxy-client.crt: {Name:mk31608fded1fe7ee0c5fdee7eb3e4fb9debe10d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 18:46:34.554642 392706 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/proxy-client.key ...
I1205 18:46:34.554653 392706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/proxy-client.key: {Name:mkfd8193014e2c724ee548fc6504a8972edc6a53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 18:46:34.554810 392706 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-381606/.minikube/certs/ca-key.pem (1679 bytes)
I1205 18:46:34.554844 392706 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-381606/.minikube/certs/ca.pem (1082 bytes)
I1205 18:46:34.554868 392706 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-381606/.minikube/certs/cert.pem (1123 bytes)
I1205 18:46:34.554890 392706 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-381606/.minikube/certs/key.pem (1675 bytes)
I1205 18:46:34.555657 392706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-381606/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1205 18:46:34.555795 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1931116226 /var/lib/minikube/certs/ca.crt
I1205 18:46:34.566283 392706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-381606/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1205 18:46:34.566423 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3311133801 /var/lib/minikube/certs/ca.key
I1205 18:46:34.576192 392706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-381606/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1205 18:46:34.576329 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube621261124 /var/lib/minikube/certs/proxy-client-ca.crt
I1205 18:46:34.584829 392706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-381606/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1205 18:46:34.585049 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1662520478 /var/lib/minikube/certs/proxy-client-ca.key
I1205 18:46:34.594239 392706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
I1205 18:46:34.594365 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3655454639 /var/lib/minikube/certs/apiserver.crt
I1205 18:46:34.602741 392706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1205 18:46:34.602905 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4229425421 /var/lib/minikube/certs/apiserver.key
I1205 18:46:34.611536 392706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1205 18:46:34.611703 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube636842749 /var/lib/minikube/certs/proxy-client.crt
I1205 18:46:34.620139 392706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1205 18:46:34.620303 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2921387457 /var/lib/minikube/certs/proxy-client.key
I1205 18:46:34.630051 392706 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
I1205 18:46:34.630084 392706 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
I1205 18:46:34.630136 392706 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
I1205 18:46:34.638522 392706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-381606/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1205 18:46:34.638699 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2521431643 /usr/share/ca-certificates/minikubeCA.pem
I1205 18:46:34.647731 392706 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1205 18:46:34.647873 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2100674310 /var/lib/minikube/kubeconfig
I1205 18:46:34.658040 392706 exec_runner.go:51] Run: openssl version
I1205 18:46:34.662113 392706 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1205 18:46:34.672036 392706 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1205 18:46:34.673748 392706 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Dec 5 18:46 /usr/share/ca-certificates/minikubeCA.pem
I1205 18:46:34.673808 392706 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1205 18:46:34.676832 392706 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1205 18:46:34.686061 392706 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1205 18:46:34.687375 392706 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1205 18:46:34.687420 392706 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.240 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1205 18:46:34.687565 392706 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1205 18:46:34.705382 392706 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1205 18:46:34.714798 392706 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1205 18:46:34.724606 392706 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I1205 18:46:34.747922 392706 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1205 18:46:34.757968 392706 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1205 18:46:34.757992 392706 kubeadm.go:157] found existing configuration files:
I1205 18:46:34.758038 392706 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1205 18:46:34.766673 392706 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1205 18:46:34.766744 392706 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
I1205 18:46:34.776134 392706 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1205 18:46:34.786200 392706 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1205 18:46:34.786263 392706 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1205 18:46:34.794269 392706 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1205 18:46:34.803428 392706 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1205 18:46:34.803504 392706 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1205 18:46:34.811344 392706 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1205 18:46:34.820187 392706 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1205 18:46:34.820247 392706 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1205 18:46:34.828194 392706 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1205 18:46:34.870510 392706 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
I1205 18:46:34.870546 392706 kubeadm.go:310] [preflight] Running pre-flight checks
I1205 18:46:34.964502 392706 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1205 18:46:34.964656 392706 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I1205 18:46:34.964684 392706 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1205 18:46:34.964694 392706 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1205 18:46:34.975934 392706 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1205 18:46:34.978913 392706 out.go:235] - Generating certificates and keys ...
I1205 18:46:34.978965 392706 kubeadm.go:310] [certs] Using existing ca certificate authority
I1205 18:46:34.978977 392706 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I1205 18:46:35.253026 392706 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I1205 18:46:35.562792 392706 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I1205 18:46:35.631580 392706 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I1205 18:46:35.716662 392706 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I1205 18:46:35.898010 392706 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I1205 18:46:35.898073 392706 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-15] and IPs [10.128.15.240 127.0.0.1 ::1]
I1205 18:46:35.949614 392706 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I1205 18:46:35.949666 392706 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-15] and IPs [10.128.15.240 127.0.0.1 ::1]
I1205 18:46:36.038237 392706 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I1205 18:46:36.164527 392706 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I1205 18:46:36.290658 392706 kubeadm.go:310] [certs] Generating "sa" key and public key
I1205 18:46:36.290773 392706 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1205 18:46:36.508311 392706 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I1205 18:46:36.849132 392706 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1205 18:46:37.123451 392706 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1205 18:46:37.332695 392706 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1205 18:46:37.541415 392706 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1205 18:46:37.541904 392706 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1205 18:46:37.544212 392706 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1205 18:46:37.546813 392706 out.go:235] - Booting up control plane ...
I1205 18:46:37.546853 392706 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1205 18:46:37.546881 392706 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1205 18:46:37.546889 392706 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1205 18:46:37.563899 392706 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1205 18:46:37.570039 392706 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1205 18:46:37.570090 392706 kubeadm.go:310] [kubelet-start] Starting the kubelet
I1205 18:46:37.808962 392706 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1205 18:46:37.808989 392706 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1205 18:46:38.310635 392706 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.662223ms
I1205 18:46:38.310664 392706 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I1205 18:46:42.812379 392706 kubeadm.go:310] [api-check] The API server is healthy after 4.50172s
I1205 18:46:42.824082 392706 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1205 18:46:42.836756 392706 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1205 18:46:42.856422 392706 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I1205 18:46:42.856470 392706 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-15 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1205 18:46:42.865099 392706 kubeadm.go:310] [bootstrap-token] Using token: iryjon.gzw4zhozj14dvsi7
I1205 18:46:42.866537 392706 out.go:235] - Configuring RBAC rules ...
I1205 18:46:42.866575 392706 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1205 18:46:42.870316 392706 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1205 18:46:42.876039 392706 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1205 18:46:42.880393 392706 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1205 18:46:42.883210 392706 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1205 18:46:42.887019 392706 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1205 18:46:43.218073 392706 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1205 18:46:43.641108 392706 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I1205 18:46:44.219410 392706 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I1205 18:46:44.220430 392706 kubeadm.go:310]
I1205 18:46:44.220448 392706 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I1205 18:46:44.220453 392706 kubeadm.go:310]
I1205 18:46:44.220457 392706 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I1205 18:46:44.220461 392706 kubeadm.go:310]
I1205 18:46:44.220465 392706 kubeadm.go:310] mkdir -p $HOME/.kube
I1205 18:46:44.220469 392706 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1205 18:46:44.220472 392706 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1205 18:46:44.220476 392706 kubeadm.go:310]
I1205 18:46:44.220480 392706 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I1205 18:46:44.220483 392706 kubeadm.go:310]
I1205 18:46:44.220487 392706 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I1205 18:46:44.220490 392706 kubeadm.go:310]
I1205 18:46:44.220493 392706 kubeadm.go:310] You should now deploy a pod network to the cluster.
I1205 18:46:44.220496 392706 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1205 18:46:44.220500 392706 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1205 18:46:44.220503 392706 kubeadm.go:310]
I1205 18:46:44.220507 392706 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I1205 18:46:44.220511 392706 kubeadm.go:310] and service account keys on each node and then running the following as root:
I1205 18:46:44.220515 392706 kubeadm.go:310]
I1205 18:46:44.220518 392706 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token iryjon.gzw4zhozj14dvsi7 \
I1205 18:46:44.220523 392706 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:dbe841b1e28f4a104101b2a84f1789a91b89b2acf49afcea7c16961b03ff18e5 \
I1205 18:46:44.220527 392706 kubeadm.go:310] --control-plane
I1205 18:46:44.220531 392706 kubeadm.go:310]
I1205 18:46:44.220535 392706 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I1205 18:46:44.220540 392706 kubeadm.go:310]
I1205 18:46:44.220543 392706 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token iryjon.gzw4zhozj14dvsi7 \
I1205 18:46:44.220547 392706 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:dbe841b1e28f4a104101b2a84f1789a91b89b2acf49afcea7c16961b03ff18e5
I1205 18:46:44.223810 392706 cni.go:84] Creating CNI manager for ""
I1205 18:46:44.223839 392706 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1205 18:46:44.225789 392706 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I1205 18:46:44.227153 392706 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
I1205 18:46:44.239639 392706 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I1205 18:46:44.239910 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1866476991 /etc/cni/net.d/1-k8s.conflist
I1205 18:46:44.252194 392706 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1205 18:46:44.252285 392706 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1205 18:46:44.252332 392706 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-15 minikube.k8s.io/updated_at=2024_12_05T18_46_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
I1205 18:46:44.262941 392706 ops.go:34] apiserver oom_adj: -16
I1205 18:46:44.336837 392706 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1205 18:46:44.837016 392706 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1205 18:46:45.336957 392706 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1205 18:46:45.837673 392706 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1205 18:46:46.337692 392706 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1205 18:46:46.837740 392706 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1205 18:46:47.337022 392706 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1205 18:46:47.837502 392706 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1205 18:46:48.337499 392706 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1205 18:46:48.837044 392706 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1205 18:46:48.905831 392706 kubeadm.go:1113] duration metric: took 4.65361861s to wait for elevateKubeSystemPrivileges
I1205 18:46:48.905876 392706 kubeadm.go:394] duration metric: took 14.218460262s to StartCluster
I1205 18:46:48.905903 392706 settings.go:142] acquiring lock: {Name:mkdc0d6b86a842b5cd5a6cd70ea78a4ffd7cbb13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 18:46:48.906005 392706 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20052-381606/kubeconfig
I1205 18:46:48.906883 392706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-381606/kubeconfig: {Name:mk94906aabd0acbaafc4c687aa549eead9ea1dce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 18:46:48.907140 392706 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1205 18:46:48.907193 392706 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:true volumesnapshots:true yakd:true]
I1205 18:46:48.907342 392706 addons.go:69] Setting default-storageclass=true in profile "minikube"
I1205 18:46:48.907373 392706 addons.go:69] Setting storage-provisioner=true in profile "minikube"
I1205 18:46:48.907380 392706 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I1205 18:46:48.907389 392706 addons.go:234] Setting addon storage-provisioner=true in "minikube"
I1205 18:46:48.907395 392706 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
I1205 18:46:48.907440 392706 host.go:66] Checking if "minikube" exists ...
I1205 18:46:48.907448 392706 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
I1205 18:46:48.907457 392706 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 18:46:48.907536 392706 addons.go:69] Setting metrics-server=true in profile "minikube"
I1205 18:46:48.907559 392706 addons.go:234] Setting addon metrics-server=true in "minikube"
I1205 18:46:48.907577 392706 host.go:66] Checking if "minikube" exists ...
I1205 18:46:48.907601 392706 addons.go:69] Setting cloud-spanner=true in profile "minikube"
I1205 18:46:48.907631 392706 addons.go:234] Setting addon cloud-spanner=true in "minikube"
I1205 18:46:48.907663 392706 host.go:66] Checking if "minikube" exists ...
I1205 18:46:48.907734 392706 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
I1205 18:46:48.907813 392706 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
I1205 18:46:48.907860 392706 host.go:66] Checking if "minikube" exists ...
I1205 18:46:48.908210 392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I1205 18:46:48.908227 392706 api_server.go:166] Checking apiserver status ...
I1205 18:46:48.908229 392706 addons.go:69] Setting volcano=true in profile "minikube"
I1205 18:46:48.908243 392706 addons.go:234] Setting addon volcano=true in "minikube"
I1205 18:46:48.908264 392706 host.go:66] Checking if "minikube" exists ...
I1205 18:46:48.908268 392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1205 18:46:48.908393 392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I1205 18:46:48.908408 392706 api_server.go:166] Checking apiserver status ...
I1205 18:46:48.908434 392706 addons.go:69] Setting amd-gpu-device-plugin=true in profile "minikube"
I1205 18:46:48.908449 392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1205 18:46:48.908457 392706 addons.go:234] Setting addon amd-gpu-device-plugin=true in "minikube"
I1205 18:46:48.908470 392706 addons.go:69] Setting volumesnapshots=true in profile "minikube"
I1205 18:46:48.908483 392706 host.go:66] Checking if "minikube" exists ...
I1205 18:46:48.908491 392706 addons.go:234] Setting addon volumesnapshots=true in "minikube"
I1205 18:46:48.908547 392706 host.go:66] Checking if "minikube" exists ...
I1205 18:46:48.908547 392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I1205 18:46:48.908545 392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I1205 18:46:48.908633 392706 out.go:177] * Configuring local host environment ...
I1205 18:46:48.908643 392706 api_server.go:166] Checking apiserver status ...
I1205 18:46:48.908706 392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1205 18:46:48.908725 392706 api_server.go:166] Checking apiserver status ...
I1205 18:46:48.908767 392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1205 18:46:48.908928 392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I1205 18:46:48.908946 392706 api_server.go:166] Checking apiserver status ...
I1205 18:46:48.908976 392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1205 18:46:48.909233 392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I1205 18:46:48.909250 392706 api_server.go:166] Checking apiserver status ...
I1205 18:46:48.909278 392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1205 18:46:48.909368 392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I1205 18:46:48.909384 392706 api_server.go:166] Checking apiserver status ...
I1205 18:46:48.909417 392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1205 18:46:48.907521 392706 addons.go:69] Setting gcp-auth=true in profile "minikube"
I1205 18:46:48.909695 392706 mustload.go:65] Loading cluster: minikube
I1205 18:46:48.909999 392706 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 18:46:48.912367 392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I1205 18:46:48.912392 392706 api_server.go:166] Checking apiserver status ...
I1205 18:46:48.912428 392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1205 18:46:48.907502 392706 host.go:66] Checking if "minikube" exists ...
W1205 18:46:48.912765 392706 out.go:270] *
W1205 18:46:48.912804 392706 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
W1205 18:46:48.912820 392706 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
W1205 18:46:48.912831 392706 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
W1205 18:46:48.912838 392706 out.go:270] *
W1205 18:46:48.912922 392706 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
W1205 18:46:48.912959 392706 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
W1205 18:46:48.912998 392706 out.go:270] *
W1205 18:46:48.913085 392706 out.go:270] - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
I1205 18:46:48.913564 392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I1205 18:46:48.907359 392706 addons.go:69] Setting registry=true in profile "minikube"
I1205 18:46:48.907345 392706 addons.go:69] Setting yakd=true in profile "minikube"
W1205 18:46:48.913729 392706 out.go:270] - sudo chown -R $USER $HOME/.kube $HOME/.minikube
W1205 18:46:48.914154 392706 out.go:270] *
W1205 18:46:48.914172 392706 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
I1205 18:46:48.914201 392706 start.go:235] Will wait 6m0s for node &{Name: IP:10.128.15.240 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I1205 18:46:48.908213 392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I1205 18:46:48.914468 392706 addons.go:234] Setting addon registry=true in "minikube"
I1205 18:46:48.914513 392706 host.go:66] Checking if "minikube" exists ...
I1205 18:46:48.914468 392706 api_server.go:166] Checking apiserver status ...
I1205 18:46:48.907528 392706 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
I1205 18:46:48.914612 392706 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
I1205 18:46:48.914643 392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1205 18:46:48.914654 392706 host.go:66] Checking if "minikube" exists ...
I1205 18:46:48.914451 392706 api_server.go:166] Checking apiserver status ...
I1205 18:46:48.914441 392706 addons.go:234] Setting addon yakd=true in "minikube"
I1205 18:46:48.914953 392706 host.go:66] Checking if "minikube" exists ...
I1205 18:46:48.915069 392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1205 18:46:48.915366 392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I1205 18:46:48.915412 392706 api_server.go:166] Checking apiserver status ...
I1205 18:46:48.915456 392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I1205 18:46:48.915473 392706 api_server.go:166] Checking apiserver status ...
I1205 18:46:48.915496 392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1205 18:46:48.915460 392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1205 18:46:48.915681 392706 out.go:177] * Verifying Kubernetes components...
I1205 18:46:48.917216 392706 exec_runner.go:51] Run: sudo systemctl daemon-reload
I1205 18:46:48.941497 392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I1205 18:46:48.941536 392706 api_server.go:166] Checking apiserver status ...
I1205 18:46:48.941594 392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1205 18:46:48.946352 392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
I1205 18:46:48.946914 392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
I1205 18:46:48.948558 392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
I1205 18:46:48.948914 392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
I1205 18:46:48.949750 392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
I1205 18:46:48.951500 392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
I1205 18:46:48.951704 392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
I1205 18:46:48.952149 392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
I1205 18:46:48.963660 392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
I1205 18:46:48.965160 392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
I1205 18:46:48.965908 392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
I1205 18:46:48.966082 392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
I1205 18:46:48.966774 392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
I1205 18:46:48.966836 392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
I1205 18:46:48.976553 392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
I1205 18:46:48.976631 392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
I1205 18:46:48.977010 392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
I1205 18:46:48.977075 392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
I1205 18:46:48.980250 392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
I1205 18:46:48.980278 392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
I1205 18:46:48.980328 392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
I1205 18:46:48.980918 392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
I1205 18:46:48.980989 392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
I1205 18:46:48.981886 392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
I1205 18:46:48.981945 392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
I1205 18:46:48.985926 392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
I1205 18:46:48.985997 392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
I1205 18:46:48.987904 392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
I1205 18:46:48.987980 392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
I1205 18:46:48.989439 392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
I1205 18:46:48.994571 392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
I1205 18:46:48.995201 392706 api_server.go:204] freezer state: "THAWED"
I1205 18:46:48.995232 392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I1205 18:46:48.998172 392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
I1205 18:46:48.998262 392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
I1205 18:46:48.999236 392706 api_server.go:204] freezer state: "THAWED"
I1205 18:46:48.999283 392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I1205 18:46:49.005026 392706 api_server.go:204] freezer state: "THAWED"
I1205 18:46:49.005060 392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I1205 18:46:49.005288 392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I1205 18:46:49.005462 392706 api_server.go:204] freezer state: "THAWED"
I1205 18:46:49.005514 392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I1205 18:46:49.005544 392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I1205 18:46:49.005565 392706 host.go:66] Checking if "minikube" exists ...
I1205 18:46:49.007372 392706 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I1205 18:46:49.008025 392706 api_server.go:204] freezer state: "THAWED"
I1205 18:46:49.008051 392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I1205 18:46:49.009172 392706 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1205 18:46:49.009207 392706 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1205 18:46:49.009496 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1566787695 /etc/kubernetes/addons/metrics-apiservice.yaml
I1205 18:46:49.010124 392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
I1205 18:46:49.010142 392706 api_server.go:204] freezer state: "THAWED"
I1205 18:46:49.010160 392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I1205 18:46:49.010213 392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
I1205 18:46:49.012763 392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I1205 18:46:49.012966 392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I1205 18:46:49.013656 392706 api_server.go:204] freezer state: "THAWED"
I1205 18:46:49.013682 392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I1205 18:46:49.014209 392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I1205 18:46:49.015871 392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I1205 18:46:49.015993 392706 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
I1205 18:46:49.016070 392706 out.go:177] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1205 18:46:49.016096 392706 api_server.go:204] freezer state: "THAWED"
I1205 18:46:49.016778 392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I1205 18:46:49.016256 392706 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
I1205 18:46:49.017593 392706 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1205 18:46:49.018824 392706 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1205 18:46:49.018857 392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1205 18:46:49.018993 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3553066936 /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1205 18:46:49.019134 392706 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
I1205 18:46:49.019354 392706 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1205 18:46:49.019384 392706 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1205 18:46:49.019516 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1222195174 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1205 18:46:49.020245 392706 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1205 18:46:49.020351 392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1205 18:46:49.020771 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3954816694 /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1205 18:46:49.023896 392706 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
I1205 18:46:49.024137 392706 api_server.go:204] freezer state: "THAWED"
I1205 18:46:49.024836 392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I1205 18:46:49.024617 392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I1205 18:46:49.024662 392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I1205 18:46:49.026372 392706 addons.go:234] Setting addon default-storageclass=true in "minikube"
I1205 18:46:49.026459 392706 host.go:66] Checking if "minikube" exists ...
I1205 18:46:49.027421 392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I1205 18:46:49.027448 392706 api_server.go:166] Checking apiserver status ...
I1205 18:46:49.027487 392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1205 18:46:49.028033 392706 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
I1205 18:46:49.029726 392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I1205 18:46:49.031323 392706 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1205 18:46:49.031602 392706 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I1205 18:46:49.031638 392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1205 18:46:49.031798 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1080143081 /etc/kubernetes/addons/deployment.yaml
I1205 18:46:49.031987 392706 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I1205 18:46:49.032032 392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
I1205 18:46:49.033374 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3818939200 /etc/kubernetes/addons/volcano-deployment.yaml
I1205 18:46:49.036041 392706 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1205 18:46:49.038374 392706 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1205 18:46:49.041266 392706 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1205 18:46:49.043944 392706 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1205 18:46:49.044080 392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
I1205 18:46:49.045968 392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
I1205 18:46:49.048487 392706 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1205 18:46:49.050241 392706 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1205 18:46:49.051549 392706 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1205 18:46:49.052687 392706 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1205 18:46:49.052729 392706 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1205 18:46:49.053285 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube715282980 /etc/kubernetes/addons/rbac-external-attacher.yaml
I1205 18:46:49.056522 392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
I1205 18:46:49.056589 392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
I1205 18:46:49.060391 392706 api_server.go:204] freezer state: "THAWED"
I1205 18:46:49.060425 392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I1205 18:46:49.060883 392706 api_server.go:204] freezer state: "THAWED"
I1205 18:46:49.061035 392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I1205 18:46:49.061319 392706 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1205 18:46:49.061356 392706 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 127.0.0.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1205 18:46:49.061413 392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1205 18:46:49.061642 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2498433628 /etc/kubernetes/addons/metrics-server-deployment.yaml
I1205 18:46:49.065196 392706 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1205 18:46:49.065232 392706 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1205 18:46:49.065398 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube287044299 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1205 18:46:49.066327 392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I1205 18:46:49.066762 392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1205 18:46:49.067381 392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I1205 18:46:49.069414 392706 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1205 18:46:49.071215 392706 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1205 18:46:49.071242 392706 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
I1205 18:46:49.071251 392706 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
I1205 18:46:49.071295 392706 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
I1205 18:46:49.072963 392706 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
I1205 18:46:49.076971 392706 out.go:177] - Using image docker.io/registry:2.8.3
I1205 18:46:49.080424 392706 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I1205 18:46:49.080467 392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1205 18:46:49.080620 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube771930884 /etc/kubernetes/addons/registry-rc.yaml
I1205 18:46:49.083624 392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
I1205 18:46:49.084536 392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1205 18:46:49.085611 392706 api_server.go:204] freezer state: "THAWED"
I1205 18:46:49.085636 392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I1205 18:46:49.087832 392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I1205 18:46:49.090366 392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I1205 18:46:49.090582 392706 api_server.go:204] freezer state: "THAWED"
I1205 18:46:49.090598 392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I1205 18:46:49.094185 392706 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I1205 18:46:49.096408 392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I1205 18:46:49.096702 392706 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1205 18:46:49.096723 392706 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1205 18:46:49.096739 392706 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1205 18:46:49.096739 392706 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1205 18:46:49.096873 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1450247784 /etc/kubernetes/addons/rbac-hostpath.yaml
I1205 18:46:49.096992 392706 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I1205 18:46:49.097012 392706 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1205 18:46:49.097097 392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1205 18:46:49.096873 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4203213854 /etc/kubernetes/addons/metrics-server-rbac.yaml
I1205 18:46:49.097797 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube384872491 /etc/kubernetes/addons/yakd-ns.yaml
I1205 18:46:49.098393 392706 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
I1205 18:46:49.108468 392706 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I1205 18:46:49.108523 392706 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
I1205 18:46:49.109342 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3725792852 /etc/kubernetes/addons/ig-crd.yaml
I1205 18:46:49.111006 392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1205 18:46:49.111225 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1391726845 /etc/kubernetes/addons/storage-provisioner.yaml
I1205 18:46:49.112674 392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
I1205 18:46:49.112755 392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
I1205 18:46:49.128271 392706 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I1205 18:46:49.128862 392706 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1205 18:46:49.131501 392706 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1205 18:46:49.131531 392706 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1205 18:46:49.131718 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3682795093 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1205 18:46:49.132624 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3731156731 /etc/kubernetes/addons/registry-svc.yaml
I1205 18:46:49.134720 392706 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1205 18:46:49.134754 392706 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1205 18:46:49.134897 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1030573102 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1205 18:46:49.135070 392706 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1205 18:46:49.135099 392706 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1205 18:46:49.135463 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2114620705 /etc/kubernetes/addons/metrics-server-service.yaml
I1205 18:46:49.144222 392706 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I1205 18:46:49.144265 392706 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1205 18:46:49.144436 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1521659921 /etc/kubernetes/addons/yakd-sa.yaml
I1205 18:46:49.165534 392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1205 18:46:49.169249 392706 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I1205 18:46:49.169283 392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1205 18:46:49.169413 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2599884371 /etc/kubernetes/addons/registry-proxy.yaml
I1205 18:46:49.169752 392706 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I1205 18:46:49.169772 392706 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1205 18:46:49.169876 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube103325063 /etc/kubernetes/addons/yakd-crb.yaml
I1205 18:46:49.172012 392706 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
I1205 18:46:49.172043 392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
I1205 18:46:49.172214 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3021882922 /etc/kubernetes/addons/ig-deployment.yaml
I1205 18:46:49.181424 392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1205 18:46:49.182529 392706 api_server.go:204] freezer state: "THAWED"
I1205 18:46:49.182563 392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I1205 18:46:49.187575 392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I1205 18:46:49.187635 392706 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I1205 18:46:49.187656 392706 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
I1205 18:46:49.187664 392706 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
I1205 18:46:49.187712 392706 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
I1205 18:46:49.190736 392706 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1205 18:46:49.190774 392706 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1205 18:46:49.190941 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2780910546 /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1205 18:46:49.192778 392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1205 18:46:49.193268 392706 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1205 18:46:49.193308 392706 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1205 18:46:49.193471 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2786149826 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1205 18:46:49.195126 392706 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I1205 18:46:49.195159 392706 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1205 18:46:49.195294 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1224436272 /etc/kubernetes/addons/yakd-svc.yaml
I1205 18:46:49.208251 392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1205 18:46:49.224055 392706 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1205 18:46:49.224110 392706 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1205 18:46:49.224301 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2754282917 /etc/kubernetes/addons/rbac-external-resizer.yaml
I1205 18:46:49.242523 392706 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1205 18:46:49.242702 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1320033730 /etc/kubernetes/addons/storageclass.yaml
I1205 18:46:49.258586 392706 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1205 18:46:49.258633 392706 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1205 18:46:49.259110 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube429407668 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1205 18:46:49.278307 392706 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1205 18:46:49.278351 392706 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1205 18:46:49.278504 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3309282558 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1205 18:46:49.301433 392706 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I1205 18:46:49.301479 392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1205 18:46:49.301635 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1214070606 /etc/kubernetes/addons/yakd-dp.yaml
I1205 18:46:49.307202 392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1205 18:46:49.322344 392706 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1205 18:46:49.322385 392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1205 18:46:49.322592 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1717115819 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1205 18:46:49.352979 392706 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1205 18:46:49.353020 392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1205 18:46:49.353171 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1446112244 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1205 18:46:49.378340 392706 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1205 18:46:49.378391 392706 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1205 18:46:49.378555 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3957832376 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1205 18:46:49.384608 392706 exec_runner.go:51] Run: sudo systemctl start kubelet
I1205 18:46:49.409193 392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1205 18:46:49.409375 392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1205 18:46:49.519758 392706 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1205 18:46:49.519895 392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1205 18:46:49.520566 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1367396723 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1205 18:46:49.543940 392706 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-15" to be "Ready" ...
I1205 18:46:49.547610 392706 node_ready.go:49] node "ubuntu-20-agent-15" has status "Ready":"True"
I1205 18:46:49.547634 392706 node_ready.go:38] duration metric: took 3.656512ms for node "ubuntu-20-agent-15" to be "Ready" ...
I1205 18:46:49.547649 392706 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1205 18:46:49.559036 392706 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-jjc5z" in "kube-system" namespace to be "Ready" ...
I1205 18:46:49.566396 392706 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1205 18:46:49.566444 392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1205 18:46:49.566621 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2010582015 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1205 18:46:49.575763 392706 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
I1205 18:46:49.638685 392706 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1205 18:46:49.638725 392706 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1205 18:46:49.638898 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3300979419 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1205 18:46:49.796061 392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1205 18:46:50.080153 392706 addons.go:475] Verifying addon registry=true in "minikube"
I1205 18:46:50.081204 392706 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I1205 18:46:50.086096 392706 out.go:177] * Verifying registry addon...
I1205 18:46:50.103769 392706 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1205 18:46:50.108866 392706 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1205 18:46:50.109050 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:46:50.349124 392706 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.16763377s)
I1205 18:46:50.349171 392706 addons.go:475] Verifying addon metrics-server=true in "minikube"
I1205 18:46:50.430569 392706 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.264982863s)
I1205 18:46:50.488697 392706 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.280391185s)
I1205 18:46:50.607892 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:46:50.729988 392706 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.32055571s)
I1205 18:46:50.732160 392706 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube service yakd-dashboard -n yakd-dashboard
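Note: the same readiness gate can be expressed directly with kubectl before running the command above. This is a sketch; the label selector app.kubernetes.io/name=yakd-dashboard is an assumed YAKD pod label, not one shown in this log:

    # Sketch (selector is an assumption): block until the YAKD pod is Ready, then open the service
    kubectl -n yakd-dashboard wait pod -l app.kubernetes.io/name=yakd-dashboard --for=condition=Ready --timeout=180s
    minikube service yakd-dashboard -n yakd-dashboard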
I1205 18:46:51.134079 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:46:51.261689 392706 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.852152198s)
W1205 18:46:51.261990 392706 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1205 18:46:51.262937 392706 retry.go:31] will retry after 308.794095ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
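The failure above is an ordering race: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines it, so the apiserver has no resource mapping for the kind yet; minikube's retry (with --force, below) succeeds once the CRD is established. A minimal sketch of avoiding the race by hand, using the CRD name from the stdout above:

    # Sketch: apply the CRD first, wait until it is Established, then apply objects of that kind
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml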
I1205 18:46:51.566243 392706 pod_ready.go:103] pod "amd-gpu-device-plugin-jjc5z" in "kube-system" namespace has status "Ready":"False"
I1205 18:46:51.572512 392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1205 18:46:51.609726 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:46:52.122818 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:46:52.454449 392706 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.658312859s)
I1205 18:46:52.454497 392706 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
I1205 18:46:52.456369 392706 out.go:177] * Verifying csi-hostpath-driver addon...
I1205 18:46:52.459655 392706 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1205 18:46:52.473546 392706 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1205 18:46:52.473580 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:46:52.483248 392706 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.395369671s)
I1205 18:46:52.609342 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:46:52.972944 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:46:53.109078 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:46:53.464673 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:46:53.565278 392706 pod_ready.go:93] pod "amd-gpu-device-plugin-jjc5z" in "kube-system" namespace has status "Ready":"True"
I1205 18:46:53.565300 392706 pod_ready.go:82] duration metric: took 4.006135965s for pod "amd-gpu-device-plugin-jjc5z" in "kube-system" namespace to be "Ready" ...
I1205 18:46:53.565311 392706 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fwgcs" in "kube-system" namespace to be "Ready" ...
I1205 18:46:53.607858 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:46:53.963829 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:46:54.108586 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:46:54.431194 392706 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.858625696s)
I1205 18:46:54.465140 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:46:54.608155 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:46:54.966090 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:46:55.109011 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:46:55.465700 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:46:55.571183 392706 pod_ready.go:103] pod "coredns-7c65d6cfc9-fwgcs" in "kube-system" namespace has status "Ready":"False"
I1205 18:46:55.609061 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:46:55.965662 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:46:56.015583 392706 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1205 18:46:56.015749 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2113576741 /var/lib/minikube/google_application_credentials.json
I1205 18:46:56.028499 392706 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1205 18:46:56.028661 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2241339044 /var/lib/minikube/google_cloud_project
I1205 18:46:56.041206 392706 addons.go:234] Setting addon gcp-auth=true in "minikube"
I1205 18:46:56.041297 392706 host.go:66] Checking if "minikube" exists ...
I1205 18:46:56.042224 392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I1205 18:46:56.042255 392706 api_server.go:166] Checking apiserver status ...
I1205 18:46:56.042296 392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1205 18:46:56.065444 392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
I1205 18:46:56.079882 392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
I1205 18:46:56.079965 392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
I1205 18:46:56.092754 392706 api_server.go:204] freezer state: "THAWED"
I1205 18:46:56.092795 392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I1205 18:46:56.098421 392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
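The healthz probe logged here can be reproduced by hand. A sketch, assuming the standard minikube certificate paths (they do not appear in this log):

    # Sketch: probe the apiserver health endpoint with the minikube client certs (paths assumed)
    curl --cacert ~/.minikube/ca.crt \
         --cert ~/.minikube/profiles/minikube/client.crt \
         --key ~/.minikube/profiles/minikube/client.key \
         https://10.128.15.240:8443/healthz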
I1205 18:46:56.098508 392706 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
I1205 18:46:56.102542 392706 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I1205 18:46:56.104274 392706 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1205 18:46:56.105737 392706 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1205 18:46:56.105785 392706 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1205 18:46:56.105949 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2805309064 /etc/kubernetes/addons/gcp-auth-ns.yaml
I1205 18:46:56.109347 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:46:56.120808 392706 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1205 18:46:56.120858 392706 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1205 18:46:56.121022 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2640671215 /etc/kubernetes/addons/gcp-auth-service.yaml
I1205 18:46:56.133706 392706 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1205 18:46:56.133739 392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1205 18:46:56.133858 392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3351501913 /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1205 18:46:56.146241 392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1205 18:46:56.465367 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:46:56.570596 392706 addons.go:475] Verifying addon gcp-auth=true in "minikube"
I1205 18:46:56.572698 392706 out.go:177] * Verifying gcp-auth addon...
I1205 18:46:56.574946 392706 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1205 18:46:56.578226 392706 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1205 18:46:56.680479 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:46:56.965308 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:46:57.107918 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:46:57.466014 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:46:57.702153 392706 pod_ready.go:103] pod "coredns-7c65d6cfc9-fwgcs" in "kube-system" namespace has status "Ready":"False"
I1205 18:46:57.702829 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:46:57.972855 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:46:58.107912 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:46:58.464740 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:46:58.568513 392706 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-fwgcs" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-fwgcs" not found
I1205 18:46:58.568550 392706 pod_ready.go:82] duration metric: took 5.003232034s for pod "coredns-7c65d6cfc9-fwgcs" in "kube-system" namespace to be "Ready" ...
E1205 18:46:58.568567 392706 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-fwgcs" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-fwgcs" not found
I1205 18:46:58.568576 392706 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zk8jj" in "kube-system" namespace to be "Ready" ...
I1205 18:46:58.574604 392706 pod_ready.go:93] pod "coredns-7c65d6cfc9-zk8jj" in "kube-system" namespace has status "Ready":"True"
I1205 18:46:58.574633 392706 pod_ready.go:82] duration metric: took 6.048206ms for pod "coredns-7c65d6cfc9-zk8jj" in "kube-system" namespace to be "Ready" ...
I1205 18:46:58.574645 392706 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
I1205 18:46:58.582640 392706 pod_ready.go:93] pod "etcd-ubuntu-20-agent-15" in "kube-system" namespace has status "Ready":"True"
I1205 18:46:58.582668 392706 pod_ready.go:82] duration metric: took 8.015057ms for pod "etcd-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
I1205 18:46:58.582682 392706 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
I1205 18:46:58.587364 392706 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-15" in "kube-system" namespace has status "Ready":"True"
I1205 18:46:58.587391 392706 pod_ready.go:82] duration metric: took 4.700049ms for pod "kube-apiserver-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
I1205 18:46:58.587404 392706 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
I1205 18:46:58.592047 392706 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-15" in "kube-system" namespace has status "Ready":"True"
I1205 18:46:58.592069 392706 pod_ready.go:82] duration metric: took 4.65891ms for pod "kube-controller-manager-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
I1205 18:46:58.592079 392706 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-469rp" in "kube-system" namespace to be "Ready" ...
I1205 18:46:58.607580 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:46:58.769820 392706 pod_ready.go:93] pod "kube-proxy-469rp" in "kube-system" namespace has status "Ready":"True"
I1205 18:46:58.769856 392706 pod_ready.go:82] duration metric: took 177.768557ms for pod "kube-proxy-469rp" in "kube-system" namespace to be "Ready" ...
I1205 18:46:58.769873 392706 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
I1205 18:46:58.965222 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:46:59.108651 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:46:59.169189 392706 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-15" in "kube-system" namespace has status "Ready":"True"
I1205 18:46:59.169271 392706 pod_ready.go:82] duration metric: took 399.388808ms for pod "kube-scheduler-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
I1205 18:46:59.169292 392706 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-ztwcn" in "kube-system" namespace to be "Ready" ...
I1205 18:46:59.465083 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:46:59.680775 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:46:59.965320 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:00.108296 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:47:00.464288 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:00.607729 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:47:00.965067 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:01.199066 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:47:01.203216 392706 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-ztwcn" in "kube-system" namespace has status "Ready":"False"
I1205 18:47:01.464207 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:01.608182 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:47:01.965448 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:02.107681 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:47:02.465293 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:02.608187 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:47:02.675281 392706 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-ztwcn" in "kube-system" namespace has status "Ready":"True"
I1205 18:47:02.675312 392706 pod_ready.go:82] duration metric: took 3.506010384s for pod "nvidia-device-plugin-daemonset-ztwcn" in "kube-system" namespace to be "Ready" ...
I1205 18:47:02.675325 392706 pod_ready.go:39] duration metric: took 13.127659766s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1205 18:47:02.675351 392706 api_server.go:52] waiting for apiserver process to appear ...
I1205 18:47:02.675427 392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1205 18:47:02.695187 392706 api_server.go:72] duration metric: took 13.78094142s to wait for apiserver process to appear ...
I1205 18:47:02.695218 392706 api_server.go:88] waiting for apiserver healthz status ...
I1205 18:47:02.695249 392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I1205 18:47:02.699816 392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I1205 18:47:02.700810 392706 api_server.go:141] control plane version: v1.31.2
I1205 18:47:02.700838 392706 api_server.go:131] duration metric: took 5.610942ms to wait for apiserver health ...
I1205 18:47:02.700849 392706 system_pods.go:43] waiting for kube-system pods to appear ...
I1205 18:47:02.708782 392706 system_pods.go:59] 17 kube-system pods found
I1205 18:47:02.708834 392706 system_pods.go:61] "amd-gpu-device-plugin-jjc5z" [f828cfe8-480f-42a5-8e47-eb2a2e5f4a1e] Running
I1205 18:47:02.708845 392706 system_pods.go:61] "coredns-7c65d6cfc9-zk8jj" [7adf42d9-14af-4e94-adae-b04af746e283] Running
I1205 18:47:02.708856 392706 system_pods.go:61] "csi-hostpath-attacher-0" [7d308609-1109-41ee-919c-93fefc7b9d56] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1205 18:47:02.708874 392706 system_pods.go:61] "csi-hostpath-resizer-0" [68b7b7f0-6085-4d0a-a17c-4a86015fa4ef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1205 18:47:02.708890 392706 system_pods.go:61] "csi-hostpathplugin-6l6p5" [72ecf43c-9c33-4354-bb62-a25130b9ed65] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1205 18:47:02.708928 392706 system_pods.go:61] "etcd-ubuntu-20-agent-15" [662d8ffa-fc3a-41fc-a149-15e7136dc6ad] Running
I1205 18:47:02.708936 392706 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-15" [6fdda757-9d19-4e58-a9de-3eb01f3c222d] Running
I1205 18:47:02.708944 392706 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-15" [77968090-16a4-4af1-a253-1b9c1c84b83f] Running
I1205 18:47:02.708949 392706 system_pods.go:61] "kube-proxy-469rp" [0f95cbc3-0d36-4d85-b1a3-3271dbb30d28] Running
I1205 18:47:02.708954 392706 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-15" [e7f18375-954d-47e5-badf-a043eb4a045b] Running
I1205 18:47:02.708962 392706 system_pods.go:61] "metrics-server-84c5f94fbc-4rstm" [dfef15df-0ac2-42d6-ae56-67fdb95b6a8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1205 18:47:02.708968 392706 system_pods.go:61] "nvidia-device-plugin-daemonset-ztwcn" [95079423-3a8c-43d2-af27-55852564e9ae] Running
I1205 18:47:02.708977 392706 system_pods.go:61] "registry-66c9cd494c-jgf47" [9f55f79d-b172-464c-9881-382ccbd93912] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1205 18:47:02.708984 392706 system_pods.go:61] "registry-proxy-wl4vl" [5cf2fdd8-e0ad-481c-b4ee-4307a7236b36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1205 18:47:02.708995 392706 system_pods.go:61] "snapshot-controller-56fcc65765-ksj7l" [88702745-5bf8-4e07-a722-327cdbc69b9e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1205 18:47:02.709005 392706 system_pods.go:61] "snapshot-controller-56fcc65765-v98wh" [fe983053-ea62-4dc3-9c0f-ecd39b63919e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1205 18:47:02.709011 392706 system_pods.go:61] "storage-provisioner" [b74c2937-c6b0-4e32-b3f8-b9b13659a848] Running
I1205 18:47:02.709021 392706 system_pods.go:74] duration metric: took 8.163275ms to wait for pod list to return data ...
I1205 18:47:02.709031 392706 default_sa.go:34] waiting for default service account to be created ...
I1205 18:47:02.769708 392706 default_sa.go:45] found service account: "default"
I1205 18:47:02.769738 392706 default_sa.go:55] duration metric: took 60.698722ms for default service account to be created ...
I1205 18:47:02.769752 392706 system_pods.go:116] waiting for k8s-apps to be running ...
I1205 18:47:02.964997 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:02.975075 392706 system_pods.go:86] 17 kube-system pods found
I1205 18:47:02.975111 392706 system_pods.go:89] "amd-gpu-device-plugin-jjc5z" [f828cfe8-480f-42a5-8e47-eb2a2e5f4a1e] Running
I1205 18:47:02.975121 392706 system_pods.go:89] "coredns-7c65d6cfc9-zk8jj" [7adf42d9-14af-4e94-adae-b04af746e283] Running
I1205 18:47:02.975132 392706 system_pods.go:89] "csi-hostpath-attacher-0" [7d308609-1109-41ee-919c-93fefc7b9d56] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1205 18:47:02.975142 392706 system_pods.go:89] "csi-hostpath-resizer-0" [68b7b7f0-6085-4d0a-a17c-4a86015fa4ef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1205 18:47:02.975153 392706 system_pods.go:89] "csi-hostpathplugin-6l6p5" [72ecf43c-9c33-4354-bb62-a25130b9ed65] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1205 18:47:02.975162 392706 system_pods.go:89] "etcd-ubuntu-20-agent-15" [662d8ffa-fc3a-41fc-a149-15e7136dc6ad] Running
I1205 18:47:02.975169 392706 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-15" [6fdda757-9d19-4e58-a9de-3eb01f3c222d] Running
I1205 18:47:02.975179 392706 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-15" [77968090-16a4-4af1-a253-1b9c1c84b83f] Running
I1205 18:47:02.975185 392706 system_pods.go:89] "kube-proxy-469rp" [0f95cbc3-0d36-4d85-b1a3-3271dbb30d28] Running
I1205 18:47:02.975194 392706 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-15" [e7f18375-954d-47e5-badf-a043eb4a045b] Running
I1205 18:47:02.975205 392706 system_pods.go:89] "metrics-server-84c5f94fbc-4rstm" [dfef15df-0ac2-42d6-ae56-67fdb95b6a8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1205 18:47:02.975217 392706 system_pods.go:89] "nvidia-device-plugin-daemonset-ztwcn" [95079423-3a8c-43d2-af27-55852564e9ae] Running
I1205 18:47:02.975229 392706 system_pods.go:89] "registry-66c9cd494c-jgf47" [9f55f79d-b172-464c-9881-382ccbd93912] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1205 18:47:02.975239 392706 system_pods.go:89] "registry-proxy-wl4vl" [5cf2fdd8-e0ad-481c-b4ee-4307a7236b36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1205 18:47:02.975248 392706 system_pods.go:89] "snapshot-controller-56fcc65765-ksj7l" [88702745-5bf8-4e07-a722-327cdbc69b9e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1205 18:47:02.975260 392706 system_pods.go:89] "snapshot-controller-56fcc65765-v98wh" [fe983053-ea62-4dc3-9c0f-ecd39b63919e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1205 18:47:02.975270 392706 system_pods.go:89] "storage-provisioner" [b74c2937-c6b0-4e32-b3f8-b9b13659a848] Running
I1205 18:47:02.975282 392706 system_pods.go:126] duration metric: took 205.521624ms to wait for k8s-apps to be running ...
I1205 18:47:02.975295 392706 system_svc.go:44] waiting for kubelet service to be running ...
I1205 18:47:02.975356 392706 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I1205 18:47:02.990603 392706 system_svc.go:56] duration metric: took 15.292495ms WaitForService to wait for kubelet
I1205 18:47:02.990640 392706 kubeadm.go:582] duration metric: took 14.07640579s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1205 18:47:02.990667 392706 node_conditions.go:102] verifying NodePressure condition ...
I1205 18:47:03.108115 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:47:03.169970 392706 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1205 18:47:03.170002 392706 node_conditions.go:123] node cpu capacity is 8
I1205 18:47:03.170020 392706 node_conditions.go:105] duration metric: took 179.34669ms to run NodePressure ...
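The NodePressure check reads these numbers from the node object's status; the same capacities can be inspected directly:

    # Inspect the capacity fields the NodePressure check reads
    kubectl get node ubuntu-20-agent-15 -o jsonpath='{.status.capacity}'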
I1205 18:47:03.170043 392706 start.go:241] waiting for startup goroutines ...
I1205 18:47:03.463884 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:03.686072 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:47:03.964928 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:04.107991 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:47:04.464275 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:04.607849 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:47:04.964792 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:05.108046 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:47:05.465596 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:05.608363 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:47:05.965279 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:06.107814 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:47:06.464838 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:06.610026 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:47:06.968972 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:07.108223 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:47:07.464758 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:07.680442 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:47:07.964953 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:08.113963 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:47:08.465378 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:08.608570 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:47:08.964880 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:09.108617 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:47:09.464583 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:09.608998 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:47:09.966051 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:10.107312 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1205 18:47:10.465094 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:10.608143 392706 kapi.go:107] duration metric: took 20.504365742s to wait for kubernetes.io/minikube-addons=registry ...
I1205 18:47:10.964472 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:11.465172 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:11.965113 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:12.464521 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:12.986720 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:13.465033 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:13.973776 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:14.464463 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:14.965448 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:15.464973 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:15.987260 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:16.465299 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:16.964389 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:17.465827 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:17.963594 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:18.465151 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:18.964635 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:19.465520 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:19.965957 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:20.464529 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:20.965098 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:21.464325 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:21.964858 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:22.464236 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:23.024941 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:23.464410 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:23.966106 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:24.465553 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:24.978106 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:25.465212 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:25.965534 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1205 18:47:26.464843 392706 kapi.go:107] duration metric: took 34.00519142s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
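The kapi.go polling above is a label-selector readiness gate; an equivalent one-shot gate with kubectl, using the exact selectors and the 6m budget from this log:

    kubectl -n kube-system wait pod -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=6m
    kubectl -n kube-system wait pod -l kubernetes.io/minikube-addons=csi-hostpath-driver --for=condition=Ready --timeout=6m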
I1205 18:47:38.079615 392706 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1205 18:47:38.079645 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:38.579029 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:39.078461 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:39.579164 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:40.078509 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:40.579142 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:41.078837 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:41.579629 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:42.079368 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:42.578774 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:43.079590 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:43.578935 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:44.078438 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:44.579111 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:45.078429 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:45.579466 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:46.079197 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:46.578586 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:47.078792 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:47.578137 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:48.078503 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:48.578681 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:49.078311 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:49.578854 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:50.078476 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:50.577860 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:51.078428 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:51.578461 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:52.079127 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:52.578218 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:53.078698 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:53.579828 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:54.079310 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:54.578693 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:55.078902 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:55.578241 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:56.078879 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:56.579057 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:57.078280 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:57.578818 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:58.078572 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:58.579097 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:59.078187 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:47:59.578963 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:00.078946 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:00.580637 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:01.078266 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:01.578547 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:02.078808 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:02.580110 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:03.079414 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:03.578832 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:04.079228 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:04.578833 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:05.078002 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:05.578226 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:06.078959 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:06.578244 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:07.078639 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:07.579331 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:08.079225 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:08.578036 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:09.078692 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:09.579389 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:10.079380 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:10.579194 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:11.121304 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:11.578338 392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1205 18:48:12.079264 392706 kapi.go:107] duration metric: took 1m15.504316026s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1205 18:48:12.081150 392706 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
I1205 18:48:12.082748 392706 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1205 18:48:12.084069 392706 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
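Concretely, the two options in the messages above look like this; the "true" value on the label is an assumption, since the message only requires the `gcp-auth-skip-secret` key to be present:

    # Re-run the addon so existing pods get the credential mount
    minikube addons enable gcp-auth --refresh
    # Or opt a pod out in its manifest before creation (label value assumed):
    #   metadata:
    #     labels:
    #       gcp-auth-skip-secret: "true"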
I1205 18:48:12.085580 392706 out.go:177] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, default-storageclass, cloud-spanner, metrics-server, storage-provisioner, inspektor-gadget, yakd, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
I1205 18:48:12.087098 392706 addons.go:510] duration metric: took 1m23.179912011s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin default-storageclass cloud-spanner metrics-server storage-provisioner inspektor-gadget yakd volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
I1205 18:48:12.087159 392706 start.go:246] waiting for cluster config update ...
I1205 18:48:12.087186 392706 start.go:255] writing updated cluster config ...
I1205 18:48:12.087461 392706 exec_runner.go:51] Run: rm -f paused
I1205 18:48:12.134843 392706 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
I1205 18:48:12.136952 392706 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
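A quick check of the claim above, and of the "minor skew: 0" note (which compares the client and server minor versions):

    kubectl config current-context   # should print: minikube
    kubectl version                  # client v1.31.3 vs server v1.31.2 -> minor skew 0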
==> Docker <==
-- Logs begin at Wed 2024-10-16 18:17:53 UTC, end at Thu 2024-12-05 18:54:13 UTC. --
Dec 05 18:47:31 ubuntu-20-agent-15 cri-dockerd[393251]: time="2024-12-05T18:47:31Z" level=info msg="Stop pulling image docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: Pulling from volcanosh/vc-scheduler"
Dec 05 18:47:53 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:47:53.898387701Z" level=warning msg="reference for unknown type: " digest="sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882" remote="docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882" spanID=ee4d6794cceaad1c traceID=a9c325cca610479b2a1d37c8ac3f9081
Dec 05 18:47:54 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:47:54.098694552Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" spanID=ee4d6794cceaad1c traceID=a9c325cca610479b2a1d37c8ac3f9081
Dec 05 18:47:54 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:47:54.100453689Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" spanID=ee4d6794cceaad1c traceID=a9c325cca610479b2a1d37c8ac3f9081
Dec 05 18:48:00 ubuntu-20-agent-15 cri-dockerd[393251]: time="2024-12-05T18:48:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/67f19b0eaf6c2313b8891949cc88c86bd823a48b76f8ee6e58b250fdd30337d6/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
Dec 05 18:48:00 ubuntu-20-agent-15 cri-dockerd[393251]: time="2024-12-05T18:48:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/082309ae8953784f62149319fa1a2c3c6ecdf57ca123adc5d9481774d8f83ef1/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
Dec 05 18:48:00 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:48:00.224318012Z" level=warning msg="reference for unknown type: " digest="sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f" remote="registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f" spanID=546ae4ccd364f69d traceID=049b34ed9a1a4f532ef43f3545b9166e
Dec 05 18:48:01 ubuntu-20-agent-15 cri-dockerd[393251]: time="2024-12-05T18:48:01Z" level=info msg="Stop pulling image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f: Status: Downloaded newer image for registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f"
Dec 05 18:48:01 ubuntu-20-agent-15 cri-dockerd[393251]: time="2024-12-05T18:48:01Z" level=info msg="Stop pulling image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f: Status: Image is up to date for registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f"
Dec 05 18:48:01 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:48:01.349242433Z" level=info msg="ignoring event" container=676b89476343e14831156b16216ea7c8ac2802cea18a71bdfe50fa6ac92ab5f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 05 18:48:01 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:48:01.384561184Z" level=info msg="ignoring event" container=de60c0d4958cd572f7b2dc193ee52a4463a8290563a3a520b8a7a20ab323c685 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 05 18:48:02 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:48:02.602121861Z" level=info msg="ignoring event" container=082309ae8953784f62149319fa1a2c3c6ecdf57ca123adc5d9481774d8f83ef1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 05 18:48:02 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:48:02.618810830Z" level=info msg="ignoring event" container=67f19b0eaf6c2313b8891949cc88c86bd823a48b76f8ee6e58b250fdd30337d6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 05 18:48:09 ubuntu-20-agent-15 cri-dockerd[393251]: time="2024-12-05T18:48:09Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4fc041be94946e82a9b10a3aea51c30f3c669f98ae9e731258e1563644663770/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
Dec 05 18:48:09 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:48:09.990452188Z" level=warning msg="reference for unknown type: " digest="sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7" spanID=188720d6769e3297 traceID=039b4b4f5544fd290559326ba2b5ff7e
Dec 05 18:48:10 ubuntu-20-agent-15 cri-dockerd[393251]: time="2024-12-05T18:48:10Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7"
Dec 05 18:48:37 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:48:37.900647292Z" level=warning msg="reference for unknown type: " digest="sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882" remote="docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882" spanID=d0170cc20eec9a0b traceID=5d091e2e4955624a35c585db4040c34a
Dec 05 18:48:38 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:48:38.083244338Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" spanID=d0170cc20eec9a0b traceID=5d091e2e4955624a35c585db4040c34a
Dec 05 18:48:38 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:48:38.084827787Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" spanID=d0170cc20eec9a0b traceID=5d091e2e4955624a35c585db4040c34a
Dec 05 18:50:05 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:50:05.889632268Z" level=warning msg="reference for unknown type: " digest="sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882" remote="docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882" spanID=8009669076fa0948 traceID=6fdd82d9d2011f598526fa2414b7d736
Dec 05 18:50:06 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:50:06.261161558Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" spanID=8009669076fa0948 traceID=6fdd82d9d2011f598526fa2414b7d736
Dec 05 18:50:06 ubuntu-20-agent-15 cri-dockerd[393251]: time="2024-12-05T18:50:06Z" level=info msg="Stop pulling image docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: Pulling from volcanosh/vc-scheduler"
Dec 05 18:52:50 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:52:50.905746289Z" level=warning msg="reference for unknown type: " digest="sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882" remote="docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882" spanID=f45986b4ea198c64 traceID=0e66c903af9022486cfd0106a9d632c4
Dec 05 18:52:51 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:52:51.239367759Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" spanID=f45986b4ea198c64 traceID=0e66c903af9022486cfd0106a9d632c4
Dec 05 18:52:51 ubuntu-20-agent-15 cri-dockerd[393251]: time="2024-12-05T18:52:51Z" level=info msg="Stop pulling image docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: Pulling from volcanosh/vc-scheduler"
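(Editor's note: the three pull attempts above all die on Docker Hub's anonymous-pull rate limit. A minimal mitigation sketch, not part of the captured run — authenticate the Docker daemon, or restart minikube pointed at a Hub mirror; "<dockerhub-user>" is a placeholder and the mirror URL is one common choice, not something this job used:)
docker login -u <dockerhub-user>
out/minikube-linux-amd64 start --registry-mirror=https://mirror.gcr.io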
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
99954234e5be8 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7 6 minutes ago Running gcp-auth 0 4fc041be94946 gcp-auth-c684cb797-s7lbj
502412b20138e volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e 6 minutes ago Running admission 0 47f6ecd2fd2be volcano-admission-5874dfdd79-2cwr4
f2f93cf204722 registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 6 minutes ago Running csi-snapshotter 0 d007039d62135 csi-hostpathplugin-6l6p5
cbd4521542b0f registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 6 minutes ago Running csi-provisioner 0 d007039d62135 csi-hostpathplugin-6l6p5
00d4d23edd666 registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0 6 minutes ago Running liveness-probe 0 d007039d62135 csi-hostpathplugin-6l6p5
c3375237da024 registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 6 minutes ago Running hostpath 0 d007039d62135 csi-hostpathplugin-6l6p5
10377173e4c11 registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c 6 minutes ago Running node-driver-registrar 0 d007039d62135 csi-hostpathplugin-6l6p5
ab7ccb31799a0 registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 6 minutes ago Running csi-attacher 0 ad8fe40345042 csi-hostpath-attacher-0
ebced389a49c8 registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 6 minutes ago Running csi-resizer 0 de2784bf24913 csi-hostpath-resizer-0
4c8cb924caa41 registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c 6 minutes ago Running csi-external-health-monitor-controller 0 d007039d62135 csi-hostpathplugin-6l6p5
ddb1f42ada997 volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de 6 minutes ago Running volcano-controllers 0 23f6075945481 volcano-controllers-789ffc5785-6tdfl
a9de8587faae4 volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e 6 minutes ago Exited main 0 035e9234dea47 volcano-admission-init-qp6tk
579f9bc167e81 registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 7 minutes ago Running volume-snapshot-controller 0 cad6469ff0947 snapshot-controller-56fcc65765-v98wh
30c8352c57c4a registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 7 minutes ago Running volume-snapshot-controller 0 b58889c3536e7 snapshot-controller-56fcc65765-ksj7l
b2a3a5cff703d marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 7 minutes ago Running yakd 0 3918625261f2b yakd-dashboard-67d98fc6b-2nsqg
f903738f9cd99 gcr.io/k8s-minikube/kube-registry-proxy@sha256:60ab3508367ad093b4b891231572577371a29f838d61e64d7f7d093d961c862c 7 minutes ago Running registry-proxy 0 ee6edbcbea4f4 registry-proxy-wl4vl
dcb5aa8fb0b33 ghcr.io/inspektor-gadget/inspektor-gadget@sha256:102216c464091f4d9e07d825eba0b681f0d7e0ce108957028443441d3843d1fa 7 minutes ago Running gadget 0 ae87f8c81e4b8 gadget-c4wk4
6e15d539ba115 registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9 7 minutes ago Running metrics-server 0 0116f3d10d3da metrics-server-84c5f94fbc-4rstm
6c02b5456a0e4 registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90 7 minutes ago Running registry 0 8a35bf838b64b registry-66c9cd494c-jgf47
db086619a6500 gcr.io/cloud-spanner-emulator/emulator@sha256:8fae494dce81f5167703b16f943dda76109195b8fc06bad1f3e952fe90a0b8d0 7 minutes ago Running cloud-spanner-emulator 0 96b55fe76be85 cloud-spanner-emulator-dc5db94f4-6mw9g
2c2e0de240cac nvcr.io/nvidia/k8s-device-plugin@sha256:7089559ce6153018806857f5049085bae15b3bf6f1c8bd19d8b12f707d087dea 7 minutes ago Running nvidia-device-plugin-ctr 0 ae9e5c418a252 nvidia-device-plugin-daemonset-ztwcn
9f3e9cc9dc1ec rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 7 minutes ago Running amd-gpu-device-plugin 0 1b6ba3cd604d5 amd-gpu-device-plugin-jjc5z
22de0a82d160f 6e38f40d628db 7 minutes ago Running storage-provisioner 0 f53537cc8a4fd storage-provisioner
39f84d6805255 c69fa2e9cbf5f 7 minutes ago Running coredns 0 29344feaa26d6 coredns-7c65d6cfc9-zk8jj
91ca714db528f 505d571f5fd56 7 minutes ago Running kube-proxy 0 b83bcdf0410aa kube-proxy-469rp
0b56e4852737c 0486b6c53a1b5 7 minutes ago Running kube-controller-manager 0 c10b76dd139f6 kube-controller-manager-ubuntu-20-agent-15
6201519c962ce 9499c9960544e 7 minutes ago Running kube-apiserver 0 8a320a6b85cc0 kube-apiserver-ubuntu-20-agent-15
26cdc8676d8c4 2e96e5913fc06 7 minutes ago Running etcd 0 4ca7309d93060 etcd-ubuntu-20-agent-15
4903d046814cf 847c7bc1a5418 7 minutes ago Running kube-scheduler 0 12d973ebe5bdc kube-scheduler-ubuntu-20-agent-15
==> coredns [39f84d680525] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
[INFO] Reloading complete
[INFO] 127.0.0.1:55626 - 2281 "HINFO IN 8470499205607441759.8113813181506174110. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025924016s
[INFO] 10.244.0.24:39724 - 27835 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000340434s
[INFO] 10.244.0.24:40270 - 3006 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000426447s
[INFO] 10.244.0.24:48655 - 2168 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00014034s
[INFO] 10.244.0.24:43600 - 28199 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000251882s
[INFO] 10.244.0.24:58879 - 23980 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000154676s
[INFO] 10.244.0.24:56351 - 38591 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000216233s
[INFO] 10.244.0.24:50110 - 28108 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004434318s
[INFO] 10.244.0.24:58329 - 40405 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004535187s
[INFO] 10.244.0.24:51710 - 17754 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003445827s
[INFO] 10.244.0.24:53740 - 24103 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004199713s
[INFO] 10.244.0.24:49876 - 51575 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004762414s
[INFO] 10.244.0.24:59094 - 11297 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004866412s
[INFO] 10.244.0.24:42394 - 21340 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00252329s
[INFO] 10.244.0.24:51465 - 3087 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002556127s
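(Editor's note: the NXDOMAIN run above is the expected ndots:5 search-path expansion — "storage.googleapis.com" has fewer than five dots, so it is tried against every search domain before the absolute query returns NOERROR. The pod's effective resolv.conf, reconstructed from the cri-dockerd rewrite lines logged earlier:)
nameserver 10.96.0.10
search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
options ndots:5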
==> describe nodes <==
Name: ubuntu-20-agent-15
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ubuntu-20-agent-15
kubernetes.io/os=linux
minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
minikube.k8s.io/name=minikube
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_12_05T18_46_44_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=ubuntu-20-agent-15
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-15"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 05 Dec 2024 18:46:41 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ubuntu-20-agent-15
AcquireTime: <unset>
RenewTime: Thu, 05 Dec 2024 18:54:12 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 05 Dec 2024 18:53:21 +0000 Thu, 05 Dec 2024 18:46:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 05 Dec 2024 18:53:21 +0000 Thu, 05 Dec 2024 18:46:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 05 Dec 2024 18:53:21 +0000 Thu, 05 Dec 2024 18:46:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 05 Dec 2024 18:53:21 +0000 Thu, 05 Dec 2024 18:46:41 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.128.15.240
Hostname: ubuntu-20-agent-15
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859304Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859304Ki
pods: 110
System Info:
Machine ID: 591c9f1229383743e2bfc56a050d43d1
System UUID: b37db8a4-1476-dab1-7f0f-0d5cfb4ed197
Boot ID: 39024a98-8447-46b2-bbc5-7915429b9c2d
Kernel Version: 5.15.0-1071-gcp
OS Image: Ubuntu 20.04.6 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.3.1
Kubelet Version: v1.31.2
Kube-Proxy Version: v1.31.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (24 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default cloud-spanner-emulator-dc5db94f4-6mw9g 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m24s
gadget gadget-c4wk4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m23s
gcp-auth gcp-auth-c684cb797-s7lbj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m36s
kube-system amd-gpu-device-plugin-jjc5z 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m24s
kube-system coredns-7c65d6cfc9-zk8jj 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 7m24s
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m21s
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m21s
kube-system csi-hostpathplugin-6l6p5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m21s
kube-system etcd-ubuntu-20-agent-15 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 7m31s
kube-system kube-apiserver-ubuntu-20-agent-15 250m (3%) 0 (0%) 0 (0%) 0 (0%) 7m31s
kube-system kube-controller-manager-ubuntu-20-agent-15 200m (2%) 0 (0%) 0 (0%) 0 (0%) 7m30s
kube-system kube-proxy-469rp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m24s
kube-system kube-scheduler-ubuntu-20-agent-15 100m (1%) 0 (0%) 0 (0%) 0 (0%) 7m31s
kube-system metrics-server-84c5f94fbc-4rstm 100m (1%) 0 (0%) 200Mi (0%) 0 (0%) 7m23s
kube-system nvidia-device-plugin-daemonset-ztwcn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m24s
kube-system registry-66c9cd494c-jgf47 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m23s
kube-system registry-proxy-wl4vl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m23s
kube-system snapshot-controller-56fcc65765-ksj7l 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m22s
kube-system snapshot-controller-56fcc65765-v98wh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m22s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m23s
volcano-system volcano-admission-5874dfdd79-2cwr4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m22s
volcano-system volcano-controllers-789ffc5785-6tdfl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m21s
volcano-system volcano-scheduler-6c9778cbdf-q7mcw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m21s
yakd-dashboard yakd-dashboard-67d98fc6b-2nsqg 0 (0%) 0 (0%) 128Mi (0%) 256Mi (0%) 7m23s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (10%) 0 (0%)
memory 498Mi (1%) 426Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 7m22s kube-proxy
Normal NodeAllocatableEnforced 7m35s kubelet Updated Node Allocatable limit across pods
Warning CgroupV1 7m35s kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeHasSufficientMemory 7m35s (x3 over 7m35s) kubelet Node ubuntu-20-agent-15 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 7m35s (x3 over 7m35s) kubelet Node ubuntu-20-agent-15 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 7m35s (x2 over 7m35s) kubelet Node ubuntu-20-agent-15 status is now: NodeHasSufficientPID
Normal Starting 7m35s kubelet Starting kubelet.
Normal Starting 7m30s kubelet Starting kubelet.
Warning CgroupV1 7m30s kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeAllocatableEnforced 7m30s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 7m30s kubelet Node ubuntu-20-agent-15 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 7m30s kubelet Node ubuntu-20-agent-15 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 7m30s kubelet Node ubuntu-20-agent-15 status is now: NodeHasSufficientPID
Normal RegisteredNode 7m25s node-controller Node ubuntu-20-agent-15 event: Registered Node ubuntu-20-agent-15 in Controller
==> dmesg <==
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 eb 2d e1 6f 64 08 06
[ +4.094712] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 7a 81 fa 1e ea 45 08 06
[ +0.026007] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 c2 2e aa 1f 86 08 06
[ +2.419586] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 8e 76 69 41 3d 08 06
[ +1.529031] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 8a 32 b0 19 72 08 06
[ +4.766061] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff aa d6 3c bd 28 cc 08 06
[ +0.198545] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 7b bb 99 2c 27 08 06
[ +0.085629] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 9e 70 2f 37 72 08 06
[ +3.221932] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 0b dc 0c bd 9c 08 06
[Dec 5 18:48] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 d2 53 31 68 c8 08 06
[ +0.027581] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 07 9b d6 30 b0 08 06
[ +9.711033] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 d7 da 78 3f 8e 08 06
[ +0.000509] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 96 a6 3b d0 14 e9 08 06
==> etcd [26cdc8676d8c] <==
{"level":"info","ts":"2024-12-05T18:46:39.451936Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"10.128.15.240:2380"}
{"level":"info","ts":"2024-12-05T18:46:39.452152Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"13f0e7e2a3d8cc98","initial-advertise-peer-urls":["https://10.128.15.240:2380"],"listen-peer-urls":["https://10.128.15.240:2380"],"advertise-client-urls":["https://10.128.15.240:2379"],"listen-client-urls":["https://10.128.15.240:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2024-12-05T18:46:39.452185Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-12-05T18:46:40.338533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 is starting a new election at term 1"}
{"level":"info","ts":"2024-12-05T18:46:40.338584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 became pre-candidate at term 1"}
{"level":"info","ts":"2024-12-05T18:46:40.338625Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 received MsgPreVoteResp from 13f0e7e2a3d8cc98 at term 1"}
{"level":"info","ts":"2024-12-05T18:46:40.338641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 became candidate at term 2"}
{"level":"info","ts":"2024-12-05T18:46:40.338647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 received MsgVoteResp from 13f0e7e2a3d8cc98 at term 2"}
{"level":"info","ts":"2024-12-05T18:46:40.338656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 became leader at term 2"}
{"level":"info","ts":"2024-12-05T18:46:40.338663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 13f0e7e2a3d8cc98 elected leader 13f0e7e2a3d8cc98 at term 2"}
{"level":"info","ts":"2024-12-05T18:46:40.339755Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"13f0e7e2a3d8cc98","local-member-attributes":"{Name:ubuntu-20-agent-15 ClientURLs:[https://10.128.15.240:2379]}","request-path":"/0/members/13f0e7e2a3d8cc98/attributes","cluster-id":"3112ce273fbe8262","publish-timeout":"7s"}
{"level":"info","ts":"2024-12-05T18:46:40.339757Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-12-05T18:46:40.339798Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-12-05T18:46:40.339793Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-12-05T18:46:40.339956Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-12-05T18:46:40.339983Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-12-05T18:46:40.340514Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3112ce273fbe8262","local-member-id":"13f0e7e2a3d8cc98","cluster-version":"3.5"}
{"level":"info","ts":"2024-12-05T18:46:40.340589Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-12-05T18:46:40.340623Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-12-05T18:46:40.340875Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-12-05T18:46:40.341044Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-12-05T18:46:40.341808Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.128.15.240:2379"}
{"level":"info","ts":"2024-12-05T18:46:40.341826Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-12-05T18:46:57.391036Z","caller":"traceutil/trace.go:171","msg":"trace[446777825] transaction","detail":"{read_only:false; response_revision:851; number_of_response:1; }","duration":"133.291291ms","start":"2024-12-05T18:46:57.257719Z","end":"2024-12-05T18:46:57.391010Z","steps":["trace[446777825] 'process raft request' (duration: 88.877653ms)","trace[446777825] 'compare' (duration: 44.152853ms)"],"step_count":2}
{"level":"info","ts":"2024-12-05T18:46:57.700377Z","caller":"traceutil/trace.go:171","msg":"trace[1301265378] transaction","detail":"{read_only:false; response_revision:857; number_of_response:1; }","duration":"100.432509ms","start":"2024-12-05T18:46:57.599912Z","end":"2024-12-05T18:46:57.700344Z","steps":["trace[1301265378] 'process raft request' (duration: 50.309812ms)","trace[1301265378] 'compare' (duration: 49.976695ms)"],"step_count":2}
==> gcp-auth [99954234e5be] <==
2024/12/05 18:48:11 GCP Auth Webhook started!
==> kernel <==
18:54:13 up 1:36, 0 users, load average: 0.07, 0.62, 1.35
Linux ubuntu-20-agent-15 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.6 LTS"
==> kube-apiserver [6201519c962c] <==
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
E1205 18:47:07.564392 1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.101.228:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.101.228:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.101.228:443: connect: connection refused" logger="UnhandledError"
I1205 18:47:07.600873 1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
W1205 18:47:11.575978 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.57.204:443: connect: connection refused
E1205 18:47:11.576028 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.57.204:443: connect: connection refused" logger="UnhandledError"
W1205 18:47:11.577730 1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.106.247.101:443: connect: connection refused
W1205 18:47:11.591561 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.57.204:443: connect: connection refused
E1205 18:47:11.591603 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.57.204:443: connect: connection refused" logger="UnhandledError"
W1205 18:47:11.593438 1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.106.247.101:443: connect: connection refused
W1205 18:47:17.107602 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.57.204:443: connect: connection refused
E1205 18:47:17.107653 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.57.204:443: connect: connection refused" logger="UnhandledError"
W1205 18:47:17.110309 1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.106.247.101:443: connect: connection refused
W1205 18:47:27.585832 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.57.204:443: connect: connection refused
E1205 18:47:27.585872 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.57.204:443: connect: connection refused" logger="UnhandledError"
W1205 18:47:27.587632 1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.106.247.101:443: connect: connection refused
W1205 18:47:27.599415 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.57.204:443: connect: connection refused
E1205 18:47:27.599450 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.57.204:443: connect: connection refused" logger="UnhandledError"
W1205 18:47:27.601115 1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.106.247.101:443: connect: connection refused
W1205 18:47:37.596163 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.57.204:443: connect: connection refused
E1205 18:47:37.596204 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.57.204:443: connect: connection refused" logger="UnhandledError"
W1205 18:47:59.596880 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.57.204:443: connect: connection refused
E1205 18:47:59.596955 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.57.204:443: connect: connection refused" logger="UnhandledError"
W1205 18:47:59.607647 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.57.204:443: connect: connection refused
E1205 18:47:59.607693 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.57.204:443: connect: connection refused" logger="UnhandledError"
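(Editor's note: the asymmetry above — gcp-auth-mutate.k8s.io "failing open" versus mutatepod.volcano.sh "failing closed" — reflects the two webhooks' failurePolicy settings, Ignore versus Fail. A hedged way to confirm this; the custom-columns expression is an assumption, not output captured in this run:)
kubectl --context minikube get mutatingwebhookconfigurations -o custom-columns=NAME:.metadata.name,POLICY:.webhooks[*].failurePolicy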
==> kube-controller-manager [0b56e4852737] <==
I1205 18:48:01.487469 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
I1205 18:48:02.659941 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
I1205 18:48:02.670266 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
I1205 18:48:03.666408 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
I1205 18:48:03.673223 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
I1205 18:48:03.676826 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
I1205 18:48:03.678592 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
I1205 18:48:03.684199 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
I1205 18:48:03.689366 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
I1205 18:48:07.803227 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-6c9778cbdf" duration="77.134µs"
I1205 18:48:11.676623 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-c684cb797" duration="7.494387ms"
I1205 18:48:11.676955 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-c684cb797" duration="101.368µs"
I1205 18:48:15.352050 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-15"
I1205 18:48:22.801022 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-6c9778cbdf" duration="78.72µs"
I1205 18:48:33.014786 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
I1205 18:48:33.016586 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
I1205 18:48:33.040474 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
I1205 18:48:33.041705 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
I1205 18:48:53.802710 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-6c9778cbdf" duration="72.863µs"
I1205 18:49:04.800458 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-6c9778cbdf" duration="81.156µs"
I1205 18:50:20.802182 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-6c9778cbdf" duration="89.496µs"
I1205 18:50:32.799262 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-6c9778cbdf" duration="64.743µs"
I1205 18:53:04.800652 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-6c9778cbdf" duration="133.078µs"
I1205 18:53:18.798799 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-6c9778cbdf" duration="104.187µs"
I1205 18:53:21.852809 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-15"
==> kube-proxy [91ca714db528] <==
I1205 18:46:50.764217 1 server_linux.go:66] "Using iptables proxy"
I1205 18:46:51.029933 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.128.15.240"]
E1205 18:46:51.030011 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1205 18:46:51.085295 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1205 18:46:51.085462 1 server_linux.go:169] "Using iptables Proxier"
I1205 18:46:51.098201 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1205 18:46:51.098652 1 server.go:483] "Version info" version="v1.31.2"
I1205 18:46:51.098681 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1205 18:46:51.101546 1 config.go:199] "Starting service config controller"
I1205 18:46:51.101578 1 shared_informer.go:313] Waiting for caches to sync for service config
I1205 18:46:51.101621 1 config.go:105] "Starting endpoint slice config controller"
I1205 18:46:51.101629 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I1205 18:46:51.102260 1 config.go:328] "Starting node config controller"
I1205 18:46:51.102277 1 shared_informer.go:313] Waiting for caches to sync for node config
I1205 18:46:51.205823 1 shared_informer.go:320] Caches are synced for node config
I1205 18:46:51.205884 1 shared_informer.go:320] Caches are synced for service config
I1205 18:46:51.205915 1 shared_informer.go:320] Caches are synced for endpoint slice config
==> kube-scheduler [4903d046814c] <==
W1205 18:46:41.222859 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W1205 18:46:41.222861 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1205 18:46:41.222884 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W1205 18:46:41.222901 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E1205 18:46:41.222887 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
E1205 18:46:41.222920 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W1205 18:46:41.222903 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1205 18:46:41.222971 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W1205 18:46:41.223111 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W1205 18:46:41.223131 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E1205 18:46:41.223133 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
E1205 18:46:41.223148 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W1205 18:46:42.044450 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1205 18:46:42.044512 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W1205 18:46:42.077453 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1205 18:46:42.077502 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W1205 18:46:42.091145 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E1205 18:46:42.091188 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W1205 18:46:42.148431 1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1205 18:46:42.148473 1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W1205 18:46:42.229431 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1205 18:46:42.229478 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W1205 18:46:42.404340 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E1205 18:46:42.404394 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
I1205 18:46:44.720967 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
-- Logs begin at Wed 2024-10-16 18:17:53 UTC, end at Thu 2024-12-05 18:54:13 UTC. --
Dec 05 18:49:53 ubuntu-20-agent-15 kubelet[394162]: E1205 18:49:53.792545 394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
Dec 05 18:50:06 ubuntu-20-agent-15 kubelet[394162]: E1205 18:50:06.264473 394162 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
Dec 05 18:50:06 ubuntu-20-agent-15 kubelet[394162]: E1205 18:50:06.264539 394162 kuberuntime_image.go:55] "Failed to pull image" err="toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
Dec 05 18:50:06 ubuntu-20-agent-15 kubelet[394162]: E1205 18:50:06.264659 394162 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:volcano-scheduler,Image:docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882,Command:[],Args:[--logtostderr --scheduler-conf=/volcano.scheduler/volcano-scheduler.conf --enable-healthz=true --enable-metrics=true --leader-elect=false -v=3 2>&1],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DEBUG_SOCKET_DIR,Value:/tmp/klog-socks,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scheduler-config,ReadOnly:false,MountPath:/volcano.scheduler,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:klog-sock,ReadOnly:false,MountPath:/tmp/klog-socks,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4bz59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod volcano-scheduler-6c9778cbdf-q7mcw_volcano-system(33f5e98f-fb04-4f70-b72c-d223e4812765): ErrImagePull: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Dec 05 18:50:06 ubuntu-20-agent-15 kubelet[394162]: E1205 18:50:06.265892 394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ErrImagePull: \"toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
Dec 05 18:50:20 ubuntu-20-agent-15 kubelet[394162]: E1205 18:50:20.792295 394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
Dec 05 18:50:32 ubuntu-20-agent-15 kubelet[394162]: E1205 18:50:32.791145 394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
Dec 05 18:50:44 ubuntu-20-agent-15 kubelet[394162]: E1205 18:50:44.791249 394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
Dec 05 18:50:59 ubuntu-20-agent-15 kubelet[394162]: E1205 18:50:59.790854 394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
Dec 05 18:51:14 ubuntu-20-agent-15 kubelet[394162]: E1205 18:51:14.791751 394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
Dec 05 18:51:29 ubuntu-20-agent-15 kubelet[394162]: E1205 18:51:29.791295 394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
Dec 05 18:51:44 ubuntu-20-agent-15 kubelet[394162]: E1205 18:51:44.791260 394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
Dec 05 18:51:55 ubuntu-20-agent-15 kubelet[394162]: E1205 18:51:55.791543 394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
Dec 05 18:52:09 ubuntu-20-agent-15 kubelet[394162]: E1205 18:52:09.791587 394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
Dec 05 18:52:21 ubuntu-20-agent-15 kubelet[394162]: E1205 18:52:21.791765 394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
Dec 05 18:52:36 ubuntu-20-agent-15 kubelet[394162]: E1205 18:52:36.791000 394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
Dec 05 18:52:51 ubuntu-20-agent-15 kubelet[394162]: E1205 18:52:51.242173 394162 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
Dec 05 18:52:51 ubuntu-20-agent-15 kubelet[394162]: E1205 18:52:51.242236 394162 kuberuntime_image.go:55] "Failed to pull image" err="toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
Dec 05 18:52:51 ubuntu-20-agent-15 kubelet[394162]: E1205 18:52:51.242384 394162 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:volcano-scheduler,Image:docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882,Command:[],Args:[--logtostderr --scheduler-conf=/volcano.scheduler/volcano-scheduler.conf --enable-healthz=true --enable-metrics=true --leader-elect=false -v=3 2>&1],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DEBUG_SOCKET_DIR,Value:/tmp/klog-socks,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scheduler-config,ReadOnly:false,MountPath:/volcano.scheduler,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:klog-sock,ReadOnly:false,MountPath:/tmp/klog-socks,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4bz59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod volcano-scheduler-6c9778cbdf-q7mcw_volcano-system(33f5e98f-fb04-4f70-b72c-d223e4812765): ErrImagePull: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Dec 05 18:52:51 ubuntu-20-agent-15 kubelet[394162]: E1205 18:52:51.243602 394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ErrImagePull: \"toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
Dec 05 18:53:04 ubuntu-20-agent-15 kubelet[394162]: E1205 18:53:04.792141 394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
Dec 05 18:53:18 ubuntu-20-agent-15 kubelet[394162]: E1205 18:53:18.791311 394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
Dec 05 18:53:32 ubuntu-20-agent-15 kubelet[394162]: E1205 18:53:32.791588 394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
Dec 05 18:53:45 ubuntu-20-agent-15 kubelet[394162]: E1205 18:53:45.794611 394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
Dec 05 18:54:00 ubuntu-20-agent-15 kubelet[394162]: E1205 18:54:00.791353 394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
==> storage-provisioner [22de0a82d160] <==
I1205 18:46:51.457275 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1205 18:46:51.470453 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1205 18:46:51.470538 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1205 18:46:51.478719 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1205 18:46:51.478966 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-15_2d25f92c-bf4b-417f-8537-28fee34ab274!
I1205 18:46:51.480823 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"90706887-4296-427b-b150-294488763ac5", APIVersion:"v1", ResourceVersion:"578", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-15_2d25f92c-bf4b-417f-8537-28fee34ab274 became leader
I1205 18:46:51.580193 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-15_2d25f92c-bf4b-417f-8537-28fee34ab274!
-- /stdout --
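Every pull attempt in the kubelet log above failed with toomanyrequests, Docker Hub's anonymous pull rate limit, so the failure is environmental rather than a Volcano regression. One common workaround is to make kubelet pull with credentials, which moves the pulls onto an authenticated account's higher limit. A minimal sketch, assuming a Docker Hub account is available in DOCKER_USER/DOCKER_PASS (the secret name "regcred" is illustrative); it attaches the secret to the volcano-scheduler service account shown in the pod description:

# Create a docker-registry secret in the affected namespace so image pulls
# count against an authenticated account's rate limit.
kubectl --context minikube -n volcano-system create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username="$DOCKER_USER" \
  --docker-password="$DOCKER_PASS"
# Reference the secret from the workload's service account; kubelet then
# uses it for every pull made on behalf of pods running as that account.
kubectl --context minikube -n volcano-system patch serviceaccount volcano-scheduler \
  -p '{"imagePullSecrets": [{"name": "regcred"}]}'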
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: volcano-admission-init-qp6tk volcano-scheduler-6c9778cbdf-q7mcw
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context minikube describe pod volcano-admission-init-qp6tk volcano-scheduler-6c9778cbdf-q7mcw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context minikube describe pod volcano-admission-init-qp6tk volcano-scheduler-6c9778cbdf-q7mcw: exit status 1 (63.386828ms)
** stderr **
Error from server (NotFound): pods "volcano-admission-init-qp6tk" not found
Error from server (NotFound): pods "volcano-scheduler-6c9778cbdf-q7mcw" not found
** /stderr **
helpers_test.go:279: kubectl --context minikube describe pod volcano-admission-init-qp6tk volcano-scheduler-6c9778cbdf-q7mcw: exit status 1
addons_test.go:992: (dbg) Run: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.707642605s)
--- FAIL: TestAddons/serial/Volcano (372.88s)
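An alternative that avoids registry traffic from inside the cluster is to preload the image before the test runs. A sketch, assuming the CI host can still pull (or has cached) the image; note the pod spec pins the image by digest with ImagePullPolicy: Always, so the loaded copy only helps if its digest matches the pinned sha256 above:

# Pull on the host, then copy the image into minikube's container runtime
# so kubelet can start the container from the local image store.
docker pull docker.io/volcanosh/vc-scheduler:v1.10.0
minikube image load docker.io/volcanosh/vc-scheduler:v1.10.0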