=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-460513 --alsologtostderr -v=1]
E1002 20:10:32.271926 882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:11:55.344691 882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
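(The two E-level lines above are client-go cert-rotation noise: the kubeconfig still references a client certificate under the stale addons-660088 profile, so every reload fails with "no such file or directory". They are unrelated to this test's failure. A minimal Go sketch of the same failure mode, assuming hypothetical file paths:

package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"io/fs"
)

func main() {
	// Hypothetical stale profile paths, mirroring the errors above; the
	// files were removed along with the addons-660088 profile.
	certFile := "/home/jenkins/.minikube/profiles/addons-660088/client.crt"
	keyFile := "/home/jenkins/.minikube/profiles/addons-660088/client.key"
	// tls.LoadX509KeyPair surfaces the underlying file-read error when
	// the referenced certificate no longer exists on disk.
	if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil && errors.Is(err, fs.ErrNotExist) {
		fmt.Println("Loading client cert failed:", err)
	}
}
)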
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-460513 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-460513 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-460513 --alsologtostderr -v=1] stderr:
I1002 20:08:20.604311 926552 out.go:360] Setting OutFile to fd 1 ...
I1002 20:08:20.605725 926552 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:08:20.605754 926552 out.go:374] Setting ErrFile to fd 2...
I1002 20:08:20.605771 926552 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:08:20.606095 926552 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-881023/.minikube/bin
I1002 20:08:20.606408 926552 mustload.go:65] Loading cluster: functional-460513
I1002 20:08:20.606822 926552 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:08:20.607402 926552 cli_runner.go:164] Run: docker container inspect functional-460513 --format={{.State.Status}}
I1002 20:08:20.631506 926552 host.go:66] Checking if "functional-460513" exists ...
I1002 20:08:20.631929 926552 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 20:08:20.691701 926552 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 20:08:20.678703839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1002 20:08:20.691965 926552 api_server.go:166] Checking apiserver status ...
I1002 20:08:20.692080 926552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1002 20:08:20.692124 926552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
I1002 20:08:20.709797 926552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/functional-460513/id_rsa Username:docker}
I1002 20:08:20.814392 926552 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/8243/cgroup
I1002 20:08:20.822668 926552 api_server.go:182] apiserver freezer: "7:freezer:/docker/b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e/kubepods/burstable/podb7d4b2f81362e26fd96513505b6d8dc0/db9b1101b76ebf9d644569f9577bc46d29730b1552ff6f03e52c9553fecf7545"
I1002 20:08:20.822757 926552 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e/kubepods/burstable/podb7d4b2f81362e26fd96513505b6d8dc0/db9b1101b76ebf9d644569f9577bc46d29730b1552ff6f03e52c9553fecf7545/freezer.state
I1002 20:08:20.830513 926552 api_server.go:204] freezer state: "THAWED"
I1002 20:08:20.830544 926552 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1002 20:08:20.840434 926552 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1002 20:08:20.840484 926552 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1002 20:08:20.840674 926552 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:08:20.840689 926552 addons.go:69] Setting dashboard=true in profile "functional-460513"
I1002 20:08:20.840696 926552 addons.go:238] Setting addon dashboard=true in "functional-460513"
I1002 20:08:20.840723 926552 host.go:66] Checking if "functional-460513" exists ...
I1002 20:08:20.841129 926552 cli_runner.go:164] Run: docker container inspect functional-460513 --format={{.State.Status}}
I1002 20:08:20.867336 926552 out.go:179] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1002 20:08:20.870338 926552 out.go:179] - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1002 20:08:20.873149 926552 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1002 20:08:20.873180 926552 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1002 20:08:20.873341 926552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
I1002 20:08:20.896862 926552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/functional-460513/id_rsa Username:docker}
I1002 20:08:20.999687 926552 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1002 20:08:20.999717 926552 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1002 20:08:21.015057 926552 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1002 20:08:21.015091 926552 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1002 20:08:21.029916 926552 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1002 20:08:21.029942 926552 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1002 20:08:21.044319 926552 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1002 20:08:21.044345 926552 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1002 20:08:21.058527 926552 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I1002 20:08:21.058588 926552 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1002 20:08:21.072713 926552 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1002 20:08:21.072733 926552 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1002 20:08:21.087454 926552 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1002 20:08:21.087504 926552 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1002 20:08:21.101859 926552 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1002 20:08:21.101909 926552 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1002 20:08:21.115930 926552 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1002 20:08:21.115983 926552 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1002 20:08:21.128954 926552 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1002 20:08:21.955250 926552 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p functional-460513 addons enable metrics-server
I1002 20:08:21.958269 926552 addons.go:201] Writing out "functional-460513" config to set dashboard=true...
W1002 20:08:21.958578 926552 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1002 20:08:21.959275 926552 kapi.go:59] client config for functional-460513: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.key", CAFile:"/home/jenkins/minikube-integration/21683-881023/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1002 20:08:21.959838 926552 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1002 20:08:21.959861 926552 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1002 20:08:21.959867 926552 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1002 20:08:21.959876 926552 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1002 20:08:21.959885 926552 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1002 20:08:21.976983 926552 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard kubernetes-dashboard 7c183d51-45cd-4124-8c18-547dd1781a7c 1569 0 2025-10-02 20:08:21 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-02 20:08:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.111.21.165,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.111.21.165],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1002 20:08:21.977158 926552 out.go:285] * Launching proxy ...
* Launching proxy ...
I1002 20:08:21.977291 926552 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-460513 proxy --port 36195]
I1002 20:08:21.977563 926552 dashboard.go:157] Waiting for kubectl to output host:port ...
I1002 20:08:22.051323 926552 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W1002 20:08:22.051374 926552 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1002 20:08:22.071645 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[077ceb23-637a-4253-b029-23d08100a88a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x40000ff180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400043f7c0 TLS:<nil>}
I1002 20:08:22.071730 926552 retry.go:31] will retry after 128.346µs: Temporary Error: unexpected response code: 503
I1002 20:08:22.081026 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9bbb3dd0-3096-4c4c-9c7d-49663a6cfc76] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x40007cdd40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027c3c0 TLS:<nil>}
I1002 20:08:22.081111 926552 retry.go:31] will retry after 117.352µs: Temporary Error: unexpected response code: 503
I1002 20:08:22.085423 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[00eb5d8d-a315-480d-a792-ee0f2ce06bbb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x40000ff280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400043f900 TLS:<nil>}
I1002 20:08:22.085509 926552 retry.go:31] will retry after 289.683µs: Temporary Error: unexpected response code: 503
I1002 20:08:22.090365 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f0e0cf53-fd88-42f9-817d-e320027a1a47] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x40000ff340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027c500 TLS:<nil>}
I1002 20:08:22.090433 926552 retry.go:31] will retry after 183.876µs: Temporary Error: unexpected response code: 503
I1002 20:08:22.094622 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[87ba962e-7d7e-4442-aef1-3c6ac86b3fee] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x4000682280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400043fa40 TLS:<nil>}
I1002 20:08:22.094687 926552 retry.go:31] will retry after 458.6µs: Temporary Error: unexpected response code: 503
I1002 20:08:22.098845 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a8a300fc-0921-4a6b-af63-b7e7667de2a5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x4000682300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027c640 TLS:<nil>}
I1002 20:08:22.098916 926552 retry.go:31] will retry after 582.735µs: Temporary Error: unexpected response code: 503
I1002 20:08:22.103834 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[207ee12f-93d4-4076-b93b-08b41b593c03] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x4000682380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027c8c0 TLS:<nil>}
I1002 20:08:22.103900 926552 retry.go:31] will retry after 591.949µs: Temporary Error: unexpected response code: 503
I1002 20:08:22.109020 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ea69ed95-b1ab-4751-bd37-05a1e90e36e3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x4000682440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027ca00 TLS:<nil>}
I1002 20:08:22.109087 926552 retry.go:31] will retry after 1.533459ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.114295 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6fcbe727-3faa-4b64-b181-58b042c28b2e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x40006824c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027cdc0 TLS:<nil>}
I1002 20:08:22.114358 926552 retry.go:31] will retry after 2.329666ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.120671 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d998dbe4-6377-4b2e-8c21-62477a2e17e3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x40000ff5c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400043fb80 TLS:<nil>}
I1002 20:08:22.120732 926552 retry.go:31] will retry after 4.975359ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.129598 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[16aa3b6d-f637-4238-9492-568845b64a4a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x4000682640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027cf00 TLS:<nil>}
I1002 20:08:22.129660 926552 retry.go:31] will retry after 8.169307ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.144552 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bf507169-6724-4740-a7f1-6a4ec33e67e5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x4000682740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400043fe00 TLS:<nil>}
I1002 20:08:22.144625 926552 retry.go:31] will retry after 4.980271ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.152855 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5d2cbe40-4ca5-4418-9b2b-18b9034ab24b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x40000ff700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048e000 TLS:<nil>}
I1002 20:08:22.152943 926552 retry.go:31] will retry after 11.859304ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.169476 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b5c455f0-0b58-48d9-906d-56c4be539dcf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x40000ff780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048e140 TLS:<nil>}
I1002 20:08:22.169572 926552 retry.go:31] will retry after 26.885539ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.200253 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[629878bd-74ae-4197-983d-74d4becf3920] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x4000682940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027d040 TLS:<nil>}
I1002 20:08:22.200318 926552 retry.go:31] will retry after 36.998487ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.240591 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b5d95e4f-aa6f-408e-b50c-c2a6dace7117] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x4000682a40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048e280 TLS:<nil>}
I1002 20:08:22.240656 926552 retry.go:31] will retry after 43.877252ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.288159 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5048d1be-8b99-41c7-a1e5-921448660e89] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x4000682b00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027d180 TLS:<nil>}
I1002 20:08:22.288233 926552 retry.go:31] will retry after 90.049932ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.381864 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[722924c9-c336-4ae6-a94f-9b9333441342] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x4000682b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027d2c0 TLS:<nil>}
I1002 20:08:22.381928 926552 retry.go:31] will retry after 61.001595ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.446377 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b68c7883-0c3e-43c9-83ff-d60264fc7c74] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x4000682c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027de00 TLS:<nil>}
I1002 20:08:22.446447 926552 retry.go:31] will retry after 186.248647ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.637222 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[84a9c669-6be2-460f-8dc5-e877d3c2e1d7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x40000ffa80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002d0000 TLS:<nil>}
I1002 20:08:22.637285 926552 retry.go:31] will retry after 115.077398ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.755874 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[166d3029-1cc1-46f5-8734-454cc4d08f32] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x4000682d40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048e3c0 TLS:<nil>}
I1002 20:08:22.755946 926552 retry.go:31] will retry after 398.772602ms: Temporary Error: unexpected response code: 503
I1002 20:08:23.158681 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[af00d3a1-018e-4989-8631-8fbe09c87663] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:23 GMT]] Body:0x4000682e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048e500 TLS:<nil>}
I1002 20:08:23.158760 926552 retry.go:31] will retry after 487.496813ms: Temporary Error: unexpected response code: 503
I1002 20:08:23.649552 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f1ba6c34-f146-4f0d-9c40-8d63a568ef7c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:23 GMT]] Body:0x40000ffb40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048e640 TLS:<nil>}
I1002 20:08:23.649665 926552 retry.go:31] will retry after 526.696114ms: Temporary Error: unexpected response code: 503
I1002 20:08:24.180626 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[268e3920-8d7d-412a-856d-f126be6f28ea] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:24 GMT]] Body:0x4000682f80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002d0140 TLS:<nil>}
I1002 20:08:24.180702 926552 retry.go:31] will retry after 1.32686554s: Temporary Error: unexpected response code: 503
I1002 20:08:25.510752 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ddd8f53c-c71d-46ed-b180-fbfa5318f174] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:25 GMT]] Body:0x4000683080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048e780 TLS:<nil>}
I1002 20:08:25.510817 926552 retry.go:31] will retry after 2.119624721s: Temporary Error: unexpected response code: 503
I1002 20:08:27.634116 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2fbc3ef2-accf-4da1-93a8-fd20c32e7ddb] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:27 GMT]] Body:0x40000ffc80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048e8c0 TLS:<nil>}
I1002 20:08:27.634201 926552 retry.go:31] will retry after 1.836507428s: Temporary Error: unexpected response code: 503
I1002 20:08:29.474137 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e0f3f044-ccfa-4a83-b27c-5d903d8f73c2] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:29 GMT]] Body:0x4000683200 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002d0280 TLS:<nil>}
I1002 20:08:29.474200 926552 retry.go:31] will retry after 3.586235995s: Temporary Error: unexpected response code: 503
I1002 20:08:33.064756 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0a0a21c4-0bbf-4310-a1ad-a3793fa3e223] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:33 GMT]] Body:0x4000683b40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048ea00 TLS:<nil>}
I1002 20:08:33.064824 926552 retry.go:31] will retry after 6.420180889s: Temporary Error: unexpected response code: 503
I1002 20:08:39.488621 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3b32c4ae-9186-40a1-afd4-695c1e7d66ae] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:39 GMT]] Body:0x4000683c00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048eb40 TLS:<nil>}
I1002 20:08:39.488740 926552 retry.go:31] will retry after 8.676547042s: Temporary Error: unexpected response code: 503
I1002 20:08:48.170714 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8338a081-a59f-4c50-bcfc-4a24ff9d1bbd] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:48 GMT]] Body:0x4000683cc0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048ec80 TLS:<nil>}
I1002 20:08:48.170778 926552 retry.go:31] will retry after 7.855718026s: Temporary Error: unexpected response code: 503
I1002 20:08:56.030096 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9bb3717f-bc64-413e-ae29-855abd3d3ca1] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:56 GMT]] Body:0x4000683f40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048edc0 TLS:<nil>}
I1002 20:08:56.030205 926552 retry.go:31] will retry after 26.711375454s: Temporary Error: unexpected response code: 503
I1002 20:09:22.745175 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b0159567-4d3d-49f9-a5ec-056af44a7d6c] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:09:22 GMT]] Body:0x40000ffec0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002d03c0 TLS:<nil>}
I1002 20:09:22.745260 926552 retry.go:31] will retry after 24.540176246s: Temporary Error: unexpected response code: 503
I1002 20:09:47.289738 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8472e881-f27a-4483-b831-7d65eb6787e3] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:09:47 GMT]] Body:0x4000824140 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048ef00 TLS:<nil>}
I1002 20:09:47.289800 926552 retry.go:31] will retry after 37.029199269s: Temporary Error: unexpected response code: 503
I1002 20:10:24.323091 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f8ecd2ec-3759-4e9f-901e-e9346bd9db85] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:10:24 GMT]] Body:0x40000fe0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048f040 TLS:<nil>}
I1002 20:10:24.323160 926552 retry.go:31] will retry after 1m22.480425161s: Temporary Error: unexpected response code: 503
I1002 20:11:46.806737 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7d691c2e-4557-40ca-b82e-e2f192fe0e10] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:11:46 GMT]] Body:0x40000fea80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002d0500 TLS:<nil>}
I1002 20:11:46.806804 926552 retry.go:31] will retry after 1m13.835791241s: Temporary Error: unexpected response code: 503
I1002 20:13:00.646383 926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[73d4d864-2c33-4650-9ab6-d67ff34f1fea] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:13:00 GMT]] Body:0x4000824240 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048f180 TLS:<nil>}
I1002 20:13:00.646457 926552 retry.go:31] will retry after 36.335242176s: Temporary Error: unexpected response code: 503
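(The trace above shows the proxy health check never succeeding: every probe of the dashboard URL returns 503, and dashboard.go backs off with jittered, roughly doubling delays, from 128µs up to 1m22s, until the test harness kills the process. A minimal, self-contained Go sketch of that poll-with-backoff pattern follows; pollUntilOK and the 90s delay cap are illustrative assumptions, not minikube's actual retry.go API:

package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// pollUntilOK polls url until it answers HTTP 200 or the deadline passes,
// sleeping with a jittered, roughly doubling delay between attempts, as in
// the retry trace above. Illustrative sketch only, not minikube's retry.go.
func pollUntilOK(url string, deadline time.Duration) error {
	delay := 100 * time.Microsecond
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("will retry after %v: unexpected response code: %d\n", delay, resp.StatusCode)
		}
		// Jitter the wait, then double the base delay, capped near the
		// ~1m22s maximum observed in the log.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay *= 2
		if delay > 90*time.Second {
			delay = 90 * time.Second
		}
	}
	return fmt.Errorf("%s still returning errors after %v", url, deadline)
}

func main() {
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	if err := pollUntilOK(url, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}
)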
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect functional-460513
helpers_test.go:243: (dbg) docker inspect functional-460513:
-- stdout --
[
{
"Id": "b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e",
"Created": "2025-10-02T19:54:34.194287273Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 908898,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-10-02T19:54:34.236525194Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
"ResolvConfPath": "/var/lib/docker/containers/b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e/hostname",
"HostsPath": "/var/lib/docker/containers/b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e/hosts",
"LogPath": "/var/lib/docker/containers/b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e/b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e-json.log",
"Name": "/functional-460513",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-460513:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-460513",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e",
"LowerDir": "/var/lib/docker/overlay2/cb2c9b449a3d89e392b79bf1325d8b59cc262f54a697e258214e3f921a516b36-init/diff:/var/lib/docker/overlay2/4168a6b35c0191bd222903a9b469ebe18ea5b9d5b6daa344f4a494c07b59f9f7/diff",
"MergedDir": "/var/lib/docker/overlay2/cb2c9b449a3d89e392b79bf1325d8b59cc262f54a697e258214e3f921a516b36/merged",
"UpperDir": "/var/lib/docker/overlay2/cb2c9b449a3d89e392b79bf1325d8b59cc262f54a697e258214e3f921a516b36/diff",
"WorkDir": "/var/lib/docker/overlay2/cb2c9b449a3d89e392b79bf1325d8b59cc262f54a697e258214e3f921a516b36/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-460513",
"Source": "/var/lib/docker/volumes/functional-460513/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-460513",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-460513",
"name.minikube.sigs.k8s.io": "functional-460513",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "bee011508c270ebc2e408f73210ac3ca6232133e06ba77fc00469a23ae840d07",
"SandboxKey": "/var/run/docker/netns/bee011508c27",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33896"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33897"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33900"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33898"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33899"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-460513": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "d6:74:65:19:66:d7",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "46436a08b18539b6074e0247d0c1aef98e52bada9514c01c857330a2e439d034",
"EndpointID": "dc8104321418323876b9e2a21a7a9e8d25ae8fe4b72705ceac33234352c25405",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-460513",
"b8078c0512be"
]
}
}
}
}
]
-- /stdout --
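(The NetworkSettings.Ports map in the inspect output above is where the host ports come from: the stderr log's cli_runner calls evaluate the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} to find SSH on 127.0.0.1:33896. A minimal Go sketch of the same lookup done by parsing docker inspect JSON; the inspectResult struct and hostPort helper are hypothetical names, not minikube's code:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectResult models only the slice of `docker inspect` output needed
// here: NetworkSettings.Ports maps "22/tcp" to its host bindings.
type inspectResult struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

// hostPort returns the host port bound to a container port such as
// "22/tcp", the programmatic equivalent of the Go template the log's
// cli_runner invocations pass to `docker container inspect -f`.
func hostPort(container, port string) (string, error) {
	out, err := exec.Command("docker", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var results []inspectResult
	if err := json.Unmarshal(out, &results); err != nil {
		return "", err
	}
	if len(results) == 0 || len(results[0].NetworkSettings.Ports[port]) == 0 {
		return "", fmt.Errorf("no host binding for %s on %s", port, container)
	}
	return results[0].NetworkSettings.Ports[port][0].HostPort, nil
}

func main() {
	p, err := hostPort("functional-460513", "22/tcp")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh host port:", p) // 33896 in the inspect output above
}
)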
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-460513 -n functional-460513
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-arm64 -p functional-460513 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-460513 logs -n 25: (1.226648138s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs:
-- stdout --
==> Audit <==
┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ image │ functional-460513 image save kicbase/echo-server:functional-460513 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
│ image │ functional-460513 image rm kicbase/echo-server:functional-460513 --alsologtostderr │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
│ image │ functional-460513 image ls │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
│ image │ functional-460513 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
│ image │ functional-460513 image ls │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
│ image │ functional-460513 image save --daemon kicbase/echo-server:functional-460513 --alsologtostderr │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
│ docker-env │ functional-460513 docker-env │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
│ docker-env │ functional-460513 docker-env │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
│ ssh │ functional-460513 ssh sudo cat /etc/test/nested/copy/882884/hosts │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
│ ssh │ functional-460513 ssh sudo cat /etc/ssl/certs/882884.pem │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
│ ssh │ functional-460513 ssh sudo cat /usr/share/ca-certificates/882884.pem │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
│ ssh │ functional-460513 ssh sudo cat /etc/ssl/certs/51391683.0 │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
│ ssh │ functional-460513 ssh sudo cat /etc/ssl/certs/8828842.pem │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
│ ssh │ functional-460513 ssh sudo cat /usr/share/ca-certificates/8828842.pem │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
│ ssh │ functional-460513 ssh sudo cat /etc/ssl/certs/3ec20f2e.0 │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
│ image │ functional-460513 image ls --format short --alsologtostderr │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
│ image │ functional-460513 image ls --format yaml --alsologtostderr │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
│ ssh │ functional-460513 ssh pgrep buildkitd │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ │
│ image │ functional-460513 image build -t localhost/my-image:functional-460513 testdata/build --alsologtostderr │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
│ image │ functional-460513 image ls │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
│ image │ functional-460513 image ls --format json --alsologtostderr │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
│ image │ functional-460513 image ls --format table --alsologtostderr │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
│ update-context │ functional-460513 update-context --alsologtostderr -v=2 │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
│ update-context │ functional-460513 update-context --alsologtostderr -v=2 │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
│ update-context │ functional-460513 update-context --alsologtostderr -v=2 │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/10/02 20:08:20
Running on machine: ip-172-31-24-2
Binary: Built with gc go1.24.6 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1002 20:08:20.364656 926478 out.go:360] Setting OutFile to fd 1 ...
I1002 20:08:20.364845 926478 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:08:20.364857 926478 out.go:374] Setting ErrFile to fd 2...
I1002 20:08:20.364862 926478 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:08:20.365129 926478 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-881023/.minikube/bin
I1002 20:08:20.365537 926478 out.go:368] Setting JSON to false
I1002 20:08:20.366586 926478 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":17439,"bootTime":1759418262,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I1002 20:08:20.366653 926478 start.go:140] virtualization:
I1002 20:08:20.369783 926478 out.go:179] * [functional-460513] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1002 20:08:20.373648 926478 out.go:179] - MINIKUBE_LOCATION=21683
I1002 20:08:20.373796 926478 notify.go:221] Checking for updates...
I1002 20:08:20.379533 926478 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1002 20:08:20.382459 926478 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21683-881023/kubeconfig
I1002 20:08:20.390635 926478 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-881023/.minikube
I1002 20:08:20.394015 926478 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1002 20:08:20.396945 926478 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1002 20:08:20.400262 926478 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:08:20.400826 926478 driver.go:422] Setting default libvirt URI to qemu:///system
I1002 20:08:20.430909 926478 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1002 20:08:20.431072 926478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 20:08:20.490993 926478 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 20:08:20.481776437 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1002 20:08:20.491100 926478 docker.go:319] overlay module found
I1002 20:08:20.494403 926478 out.go:179] * Using the docker driver based on existing profile
I1002 20:08:20.497299 926478 start.go:306] selected driver: docker
I1002 20:08:20.497320  926478 start.go:936] validating driver "docker" against &{Name:functional-460513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-460513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1002 20:08:20.497428 926478 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1002 20:08:20.497529 926478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 20:08:20.550819  926478 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 20:08:20.541610835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1002 20:08:20.551283 926478 cni.go:84] Creating CNI manager for ""
I1002 20:08:20.551358 926478 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1002 20:08:20.551416 926478 start.go:350] cluster config:
{Name:functional-460513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-460513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1002 20:08:20.554663 926478 out.go:179] * dry-run validation complete!
==> Docker <==
Oct 02 20:08:22 functional-460513 cri-dockerd[7470]: time="2025-10-02T20:08:22Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
Oct 02 20:08:22 functional-460513 dockerd[6691]: time="2025-10-02T20:08:22.732627175Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Oct 02 20:08:22 functional-460513 dockerd[6691]: time="2025-10-02T20:08:22.819240810Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 02 20:08:38 functional-460513 dockerd[6691]: time="2025-10-02T20:08:38.155639221Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Oct 02 20:08:38 functional-460513 dockerd[6691]: time="2025-10-02T20:08:38.244541496Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 02 20:08:38 functional-460513 dockerd[6691]: time="2025-10-02T20:08:38.294304625Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Oct 02 20:08:38 functional-460513 dockerd[6691]: time="2025-10-02T20:08:38.387000951Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 02 20:08:47 functional-460513 dockerd[6691]: time="2025-10-02T20:08:47.320794251Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 02 20:08:53 functional-460513 dockerd[6691]: time="2025-10-02T20:08:53.315418324Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 02 20:09:05 functional-460513 dockerd[6691]: time="2025-10-02T20:09:05.149055644Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Oct 02 20:09:05 functional-460513 dockerd[6691]: time="2025-10-02T20:09:05.245384626Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 02 20:09:08 functional-460513 dockerd[6691]: time="2025-10-02T20:09:08.152059959Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Oct 02 20:09:08 functional-460513 dockerd[6691]: time="2025-10-02T20:09:08.237768594Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 02 20:09:51 functional-460513 dockerd[6691]: time="2025-10-02T20:09:51.149901296Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Oct 02 20:09:51 functional-460513 dockerd[6691]: time="2025-10-02T20:09:51.344987903Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 02 20:09:51 functional-460513 cri-dockerd[7470]: time="2025-10-02T20:09:51Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Pulling from kubernetesui/metrics-scraper"
Oct 02 20:09:58 functional-460513 dockerd[6691]: time="2025-10-02T20:09:58.148114294Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Oct 02 20:09:58 functional-460513 dockerd[6691]: time="2025-10-02T20:09:58.242409584Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 02 20:11:22 functional-460513 dockerd[6691]: time="2025-10-02T20:11:22.149780472Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Oct 02 20:11:22 functional-460513 dockerd[6691]: time="2025-10-02T20:11:22.236123492Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 02 20:11:25 functional-460513 dockerd[6691]: time="2025-10-02T20:11:25.139627309Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Oct 02 20:11:25 functional-460513 dockerd[6691]: time="2025-10-02T20:11:25.226663867Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 02 20:12:21 functional-460513 dockerd[6691]: 2025/10/02 20:12:21 http2: server: error reading preface from client @: read unix /var/run/docker.sock->@: read: connection reset by peer
Oct 02 20:12:56 functional-460513 dockerd[6691]: time="2025-10-02T20:12:56.421622660Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 02 20:12:56 functional-460513 cri-dockerd[7470]: time="2025-10-02T20:12:56Z" level=info msg="Stop pulling image kicbase/echo-server:latest: latest: Pulling from kicbase/echo-server"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
623723a43c077 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 5 minutes ago Exited mount-munger 0 d79e9c9417245 busybox-mount default
bc256734b9fe8 nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8 15 minutes ago Running nginx 0 9789396a1419a nginx-svc default
fe224927b52c5 05baa95f5142d 15 minutes ago Running kube-proxy 2 3a14beb9805ac kube-proxy-z7ghw kube-system
33f4fa437242e ba04bb24b9575 15 minutes ago Running storage-provisioner 2 ed81eec2d77f8 storage-provisioner kube-system
02510aec7c38d 138784d87c9c5 15 minutes ago Running coredns 2 afeaaf344747b coredns-66bc5c9577-bb2ds kube-system
26134bc61f5d9 a1894772a478e 15 minutes ago Running etcd 2 4cf8116593459 etcd-functional-460513 kube-system
db9b1101b76eb 43911e833d64d 15 minutes ago Running kube-apiserver 0 29b883084068a kube-apiserver-functional-460513 kube-system
0e09036d7add9 b5f57ec6b9867 15 minutes ago Running kube-scheduler 2 bfb48aab841d2 kube-scheduler-functional-460513 kube-system
5d710be832df0 7eb2c6ff0c5a7 15 minutes ago Running kube-controller-manager 2 605575f71f812 kube-controller-manager-functional-460513 kube-system
11843acc93b83 138784d87c9c5 17 minutes ago Exited coredns 1 c216e22a818d3 coredns-66bc5c9577-bb2ds kube-system
5e610b6f5c956 ba04bb24b9575 17 minutes ago Exited storage-provisioner 1 d6bece758620b storage-provisioner kube-system
8013cb97c756c 05baa95f5142d 17 minutes ago Exited kube-proxy 1 6fa6f9b610fc1 kube-proxy-z7ghw kube-system
5459180499bcd b5f57ec6b9867 17 minutes ago Exited kube-scheduler 1 14d19245da307 kube-scheduler-functional-460513 kube-system
4805d040cabcf a1894772a478e 17 minutes ago Exited etcd 1 b61a48faa350c etcd-functional-460513 kube-system
cccbefc54d3cd 7eb2c6ff0c5a7 17 minutes ago Exited kube-controller-manager 1 b7d3e4afda29d kube-controller-manager-functional-460513 kube-system
==> coredns [02510aec7c38] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
CoreDNS-1.12.1
linux/arm64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:34079 - 21240 "HINFO IN 244857414700627593.4635503374353347991. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021849786s
==> coredns [11843acc93b8] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
.:53
[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
CoreDNS-1.12.1
linux/arm64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:43425 - 16476 "HINFO IN 6420058890467486523.4324477465014152588. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020251933s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> describe nodes <==
Name: functional-460513
Roles: control-plane
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=functional-460513
kubernetes.io/os=linux
minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
minikube.k8s.io/name=functional-460513
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_10_02T19_55_07_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 02 Oct 2025 19:55:04 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-460513
AcquireTime: <unset>
RenewTime: Thu, 02 Oct 2025 20:13:20 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 02 Oct 2025 20:12:30 +0000 Thu, 02 Oct 2025 19:54:59 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Oct 2025 20:12:30 +0000 Thu, 02 Oct 2025 19:54:59 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Oct 2025 20:12:30 +0000 Thu, 02 Oct 2025 19:54:59 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 02 Oct 2025 20:12:30 +0000 Thu, 02 Oct 2025 19:55:07 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-460513
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022304Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022304Ki
pods: 110
System Info:
Machine ID: 383087b4c8744483b09343609d84322f
System UUID: 5b6ef310-3cb5-4b1c-978f-45f181f323cd
Boot ID: 0abe58db-3afd-40ad-9a63-2ed98334b343
Kernel Version: 5.15.0-1084-aws
OS Image: Debian GNU/Linux 12 (bookworm)
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://28.4.0
Kubelet Version: v1.34.1
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (13 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-75c85bcc94-s8zx4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
default hello-node-connect-7d85dfc575-85j8h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15m
default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15m
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15m
kube-system coredns-66bc5c9577-bb2ds 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 18m
kube-system etcd-functional-460513 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 18m
kube-system kube-apiserver-functional-460513 250m (12%) 0 (0%) 0 (0%) 0 (0%) 15m
kube-system kube-controller-manager-functional-460513 200m (10%) 0 (0%) 0 (0%) 0 (0%) 18m
kube-system kube-proxy-z7ghw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18m
kube-system kube-scheduler-functional-460513 100m (5%) 0 (0%) 0 (0%) 0 (0%) 18m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18m
kubernetes-dashboard dashboard-metrics-scraper-77bf4d6c4c-s9ptn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m
kubernetes-dashboard kubernetes-dashboard-855c9754f9-dlfsg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (2%) 170Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 18m kube-proxy
Normal Starting 15m kube-proxy
Normal Starting 16m kube-proxy
Normal NodeHasSufficientMemory 18m (x8 over 18m) kubelet Node functional-460513 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 18m (x8 over 18m) kubelet Node functional-460513 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 18m (x7 over 18m) kubelet Node functional-460513 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 18m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientPID 18m kubelet Node functional-460513 status is now: NodeHasSufficientPID
Warning CgroupV1 18m kubelet cgroup v1 support is in maintenance mode, please migrate to cgroup v2
Normal NodeAllocatableEnforced 18m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 18m kubelet Node functional-460513 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 18m kubelet Node functional-460513 status is now: NodeHasNoDiskPressure
Normal NodeReady 18m kubelet Node functional-460513 status is now: NodeReady
Normal Starting 18m kubelet Starting kubelet.
Normal RegisteredNode 18m node-controller Node functional-460513 event: Registered Node functional-460513 in Controller
Normal RegisteredNode 16m node-controller Node functional-460513 event: Registered Node functional-460513 in Controller
Warning ContainerGCFailed 16m (x2 over 17m) kubelet rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Normal Starting 15m kubelet Starting kubelet.
Normal NodeHasNoDiskPressure 15m (x8 over 15m) kubelet Node functional-460513 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 15m (x8 over 15m) kubelet Node functional-460513 status is now: NodeHasSufficientMemory
Warning CgroupV1 15m kubelet cgroup v1 support is in maintenance mode, please migrate to cgroup v2
Normal NodeHasSufficientPID 15m (x7 over 15m) kubelet Node functional-460513 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 15m kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 15m node-controller Node functional-460513 event: Registered Node functional-460513 in Controller
==> dmesg <==
[Oct 2 18:16] kauditd_printk_skb: 8 callbacks suppressed
[Oct 2 19:46] kauditd_printk_skb: 8 callbacks suppressed
==> etcd [26134bc61f5d] <==
{"level":"warn","ts":"2025-10-02T19:57:30.255543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43808","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T19:57:30.266504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43822","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T19:57:30.329264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43844","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T19:57:30.337464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43850","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T19:57:30.364171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43874","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T19:57:30.390608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43878","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T19:57:30.429996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43902","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T19:57:30.451770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43924","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T19:57:30.474863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43944","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T19:57:30.502871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43960","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T19:57:30.549653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43976","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T19:57:30.571696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43992","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T19:57:30.604458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44014","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T19:57:30.632445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44036","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T19:57:30.711817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44066","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T19:57:30.749386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44074","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T19:57:30.773166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44094","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T19:57:30.809665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44114","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T19:57:30.895187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44136","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-10-02T20:07:29.316623Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1154}
{"level":"info","ts":"2025-10-02T20:07:29.339888Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1154,"took":"22.906662ms","hash":2845891145,"current-db-size-bytes":3248128,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1507328,"current-db-size-in-use":"1.5 MB"}
{"level":"info","ts":"2025-10-02T20:07:29.339943Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2845891145,"revision":1154,"compact-revision":-1}
{"level":"info","ts":"2025-10-02T20:12:29.323117Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1456}
{"level":"info","ts":"2025-10-02T20:12:29.326934Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1456,"took":"3.289627ms","hash":2636041803,"current-db-size-bytes":3248128,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":2297856,"current-db-size-in-use":"2.3 MB"}
{"level":"info","ts":"2025-10-02T20:12:29.326982Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2636041803,"revision":1456,"compact-revision":1154}
==> etcd [4805d040cabc] <==
{"level":"warn","ts":"2025-10-02T19:56:24.649466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56724","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T19:56:24.669130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56734","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T19:56:24.692938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56754","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T19:56:24.722613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56770","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T19:56:24.742087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56776","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T19:56:24.758015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56800","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T19:56:24.852099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56828","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-10-02T19:57:09.748218Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2025-10-02T19:57:09.748293Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-460513","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
{"level":"error","ts":"2025-10-02T19:57:09.748479Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-10-02T19:57:16.751072Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-10-02T19:57:16.753295Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-10-02T19:57:16.753526Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"info","ts":"2025-10-02T19:57:16.754984Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
{"level":"info","ts":"2025-10-02T19:57:16.755181Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
{"level":"warn","ts":"2025-10-02T19:57:16.756496Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2025-10-02T19:57:16.756697Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"error","ts":"2025-10-02T19:57:16.756785Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"warn","ts":"2025-10-02T19:57:16.756962Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
{"level":"warn","ts":"2025-10-02T19:57:16.757051Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
{"level":"error","ts":"2025-10-02T19:57:16.757145Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-10-02T19:57:16.760135Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"error","ts":"2025-10-02T19:57:16.760313Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-10-02T19:57:16.760384Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2025-10-02T19:57:16.760510Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-460513","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
==> kernel <==
20:13:21 up 4:55, 0 user, load average: 0.27, 0.42, 1.02
Linux functional-460513 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kube-apiserver [db9b1101b76e] <==
I1002 19:57:31.921322 1 cache.go:39] Caches are synced for RemoteAvailability controller
I1002 19:57:31.921533 1 shared_informer.go:356] "Caches are synced" controller="configmaps"
I1002 19:57:31.921674 1 cache.go:39] Caches are synced for LocalAvailability controller
I1002 19:57:31.921966 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1002 19:57:31.927546 1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
I1002 19:57:31.928922 1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
I1002 19:57:31.930128 1 handler_discovery.go:451] Starting ResourceDiscoveryManager
I1002 19:57:32.120722 1 controller.go:667] quota admission added evaluator for: serviceaccounts
I1002 19:57:32.628068 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W1002 19:57:33.145257 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I1002 19:57:33.146828 1 controller.go:667] quota admission added evaluator for: endpoints
I1002 19:57:33.160804 1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1002 19:57:33.783270 1 controller.go:667] quota admission added evaluator for: deployments.apps
I1002 19:57:33.834297 1 controller.go:667] quota admission added evaluator for: daemonsets.apps
I1002 19:57:33.873574 1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1002 19:57:33.886358 1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1002 19:57:35.224249 1 controller.go:667] quota admission added evaluator for: replicasets.apps
I1002 19:57:47.082177 1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.248.125"}
I1002 19:57:54.116376 1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.89.238"}
I1002 19:58:02.716762 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.203.206"}
I1002 20:02:03.053370 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.214.198"}
I1002 20:07:31.811789 1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
I1002 20:08:21.578235 1 controller.go:667] quota admission added evaluator for: namespaces
I1002 20:08:21.915300 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.21.165"}
I1002 20:08:21.942619 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.196.152"}
==> kube-controller-manager [5d710be832df] <==
I1002 19:57:35.186679 1 shared_informer.go:356] "Caches are synced" controller="cronjob"
I1002 19:57:35.189823 1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
I1002 19:57:35.193902 1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
I1002 19:57:35.195049 1 shared_informer.go:356] "Caches are synced" controller="GC"
I1002 19:57:35.199815 1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
I1002 19:57:35.210359 1 shared_informer.go:356] "Caches are synced" controller="HPA"
I1002 19:57:35.210452 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
I1002 19:57:35.213251 1 shared_informer.go:356] "Caches are synced" controller="endpoint"
I1002 19:57:35.213478 1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
I1002 19:57:35.213609 1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
I1002 19:57:35.216464 1 shared_informer.go:356] "Caches are synced" controller="disruption"
I1002 19:57:35.216526 1 shared_informer.go:356] "Caches are synced" controller="TTL"
I1002 19:57:35.216569 1 shared_informer.go:356] "Caches are synced" controller="stateful set"
I1002 19:57:35.217285 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1002 19:57:35.219381 1 shared_informer.go:356] "Caches are synced" controller="expand"
I1002 19:57:35.224560 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
E1002 20:08:21.702788 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1002 20:08:21.714297 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1002 20:08:21.726643 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1002 20:08:21.727415 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1002 20:08:21.743362 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1002 20:08:21.750012 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1002 20:08:21.757003 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1002 20:08:21.762532 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1002 20:08:21.766458 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
==> kube-controller-manager [cccbefc54d3c] <==
I1002 19:56:29.302392 1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
I1002 19:56:29.307085 1 shared_informer.go:356] "Caches are synced" controller="expand"
I1002 19:56:29.310379 1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
I1002 19:56:29.319631 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1002 19:56:29.322784 1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
I1002 19:56:29.326064 1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
I1002 19:56:29.328302 1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
I1002 19:56:29.332298 1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
I1002 19:56:29.332513 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
I1002 19:56:29.332362 1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
I1002 19:56:29.332660 1 shared_informer.go:356] "Caches are synced" controller="HPA"
I1002 19:56:29.332336 1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
I1002 19:56:29.333765 1 shared_informer.go:356] "Caches are synced" controller="cronjob"
I1002 19:56:29.333841 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
I1002 19:56:29.337080 1 shared_informer.go:356] "Caches are synced" controller="TTL"
I1002 19:56:29.339430 1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
I1002 19:56:29.339806 1 shared_informer.go:356] "Caches are synced" controller="node"
I1002 19:56:29.339958 1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
I1002 19:56:29.340109 1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
I1002 19:56:29.340280 1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
I1002 19:56:29.340402 1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
I1002 19:56:29.342935 1 shared_informer.go:356] "Caches are synced" controller="attach detach"
I1002 19:56:29.345450 1 shared_informer.go:356] "Caches are synced" controller="endpoint"
I1002 19:56:29.348097 1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
I1002 19:56:29.368572 1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
==> kube-proxy [8013cb97c756] <==
I1002 19:56:26.293475 1 server_linux.go:53] "Using iptables proxy"
I1002 19:56:26.542177 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1002 19:56:26.642813 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1002 19:56:26.642867 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
E1002 19:56:26.642965 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1002 19:56:26.881417 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1002 19:56:26.881477 1 server_linux.go:132] "Using iptables Proxier"
I1002 19:56:26.953908 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1002 19:56:26.969663 1 server.go:527] "Version info" version="v1.34.1"
I1002 19:56:26.969688 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1002 19:56:26.978549 1 config.go:200] "Starting service config controller"
I1002 19:56:26.978578 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1002 19:56:27.018283 1 config.go:106] "Starting endpoint slice config controller"
I1002 19:56:27.018304 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1002 19:56:27.018327 1 config.go:403] "Starting serviceCIDR config controller"
I1002 19:56:27.018332 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1002 19:56:27.018825 1 config.go:309] "Starting node config controller"
I1002 19:56:27.018838 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1002 19:56:27.018845 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1002 19:56:27.079689 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1002 19:56:27.118988 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1002 19:56:27.119021 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-proxy [fe224927b52c] <==
I1002 19:57:33.648761 1 server_linux.go:53] "Using iptables proxy"
I1002 19:57:33.759095 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1002 19:57:33.860901 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1002 19:57:33.862375 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
E1002 19:57:33.862589 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1002 19:57:33.950063 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1002 19:57:33.953446 1 server_linux.go:132] "Using iptables Proxier"
I1002 19:57:33.978599 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1002 19:57:33.978920 1 server.go:527] "Version info" version="v1.34.1"
I1002 19:57:33.978939 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1002 19:57:33.981527 1 config.go:200] "Starting service config controller"
I1002 19:57:33.981550 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1002 19:57:33.983742 1 config.go:106] "Starting endpoint slice config controller"
I1002 19:57:33.983757 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1002 19:57:33.983780 1 config.go:403] "Starting serviceCIDR config controller"
I1002 19:57:33.983784 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1002 19:57:33.984516 1 config.go:309] "Starting node config controller"
I1002 19:57:33.984523 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1002 19:57:33.984530 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1002 19:57:34.082480 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1002 19:57:34.085275 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1002 19:57:34.085312 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-scheduler [0e09036d7add] <==
I1002 19:57:31.457845 1 serving.go:386] Generated self-signed cert in-memory
I1002 19:57:33.347221 1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
I1002 19:57:33.347258 1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1002 19:57:33.354010 1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
I1002 19:57:33.354105 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I1002 19:57:33.354127 1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
I1002 19:57:33.356128 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I1002 19:57:33.366203 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1002 19:57:33.366227 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1002 19:57:33.366246 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I1002 19:57:33.366252 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I1002 19:57:33.454928 1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
I1002 19:57:33.467906 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I1002 19:57:33.467992 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kube-scheduler [5459180499bc] <==
I1002 19:56:24.387641 1 serving.go:386] Generated self-signed cert in-memory
I1002 19:56:26.440161 1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
I1002 19:56:26.440199 1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1002 19:56:26.454187 1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
I1002 19:56:26.454290 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I1002 19:56:26.455734 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I1002 19:56:26.461260 1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
I1002 19:56:26.461736 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1002 19:56:26.461761 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1002 19:56:26.461780 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I1002 19:56:26.461791 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I1002 19:56:26.562274 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I1002 19:56:26.562343 1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
I1002 19:56:26.562445 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1002 19:57:09.734861 1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
I1002 19:57:09.734884 1 server.go:263] "[graceful-termination] secure server has stopped listening"
I1002 19:57:09.734919 1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
I1002 19:57:09.734950 1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1002 19:57:09.734971 1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
I1002 19:57:09.735006 1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I1002 19:57:09.735269 1 server.go:265] "[graceful-termination] secure server is exiting"
E1002 19:57:09.735298 1 run.go:72] "command failed" err="finished without leader elect"
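The scheduler's exit here is a side effect of the control-plane restart rather than an error in its own right: the secure server stopped listening at 19:57:09, and "finished without leader elect" records that the process ended while it still held (or was still contending for) leadership. If confirmation were needed, the scheduler publishes its leadership state in a coordination lease; a minimal check, reusing this run's context name:

    kubectl --context functional-460513 -n kube-system get lease kube-scheduler -o yaml
    # spec.holderIdentity and spec.renewTime show which instance last held the lock and when it renewed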
==> kubelet <==
Oct 02 20:12:26 functional-460513 kubelet[7846]: E1002 20:12:26.102902 7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
Oct 02 20:12:28 functional-460513 kubelet[7846]: E1002 20:12:28.099796 7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-s8zx4" podUID="7076f721-7fec-48cb-b884-2ff8c9abbcd2"
Oct 02 20:12:29 functional-460513 kubelet[7846]: E1002 20:12:29.101388 7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-s9ptn" podUID="af01a2e9-cd27-4c95-a09b-56995a56ee5a"
Oct 02 20:12:33 functional-460513 kubelet[7846]: E1002 20:12:33.099601 7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
Oct 02 20:12:35 functional-460513 kubelet[7846]: E1002 20:12:35.101601 7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dlfsg" podUID="e55b6bd9-93b5-47cf-bb1d-cc5a9e41aa9a"
Oct 02 20:12:41 functional-460513 kubelet[7846]: E1002 20:12:41.100289 7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
Oct 02 20:12:41 functional-460513 kubelet[7846]: E1002 20:12:41.100329 7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-s8zx4" podUID="7076f721-7fec-48cb-b884-2ff8c9abbcd2"
Oct 02 20:12:43 functional-460513 kubelet[7846]: E1002 20:12:43.101553 7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-s9ptn" podUID="af01a2e9-cd27-4c95-a09b-56995a56ee5a"
Oct 02 20:12:47 functional-460513 kubelet[7846]: E1002 20:12:47.102192 7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dlfsg" podUID="e55b6bd9-93b5-47cf-bb1d-cc5a9e41aa9a"
Oct 02 20:12:48 functional-460513 kubelet[7846]: E1002 20:12:48.099802 7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
Oct 02 20:12:56 functional-460513 kubelet[7846]: E1002 20:12:56.103437 7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
Oct 02 20:12:56 functional-460513 kubelet[7846]: E1002 20:12:56.426033 7846 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
Oct 02 20:12:56 functional-460513 kubelet[7846]: E1002 20:12:56.426089 7846 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
Oct 02 20:12:56 functional-460513 kubelet[7846]: E1002 20:12:56.426173 7846 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-s8zx4_default(7076f721-7fec-48cb-b884-2ff8c9abbcd2): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Oct 02 20:12:56 functional-460513 kubelet[7846]: E1002 20:12:56.426204 7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-s8zx4" podUID="7076f721-7fec-48cb-b884-2ff8c9abbcd2"
Oct 02 20:12:57 functional-460513 kubelet[7846]: E1002 20:12:57.101973 7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-s9ptn" podUID="af01a2e9-cd27-4c95-a09b-56995a56ee5a"
Oct 02 20:12:59 functional-460513 kubelet[7846]: E1002 20:12:59.099435 7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
Oct 02 20:13:00 functional-460513 kubelet[7846]: E1002 20:13:00.155823 7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dlfsg" podUID="e55b6bd9-93b5-47cf-bb1d-cc5a9e41aa9a"
Oct 02 20:13:09 functional-460513 kubelet[7846]: E1002 20:13:09.099514 7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-s8zx4" podUID="7076f721-7fec-48cb-b884-2ff8c9abbcd2"
Oct 02 20:13:09 functional-460513 kubelet[7846]: E1002 20:13:09.101570 7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-s9ptn" podUID="af01a2e9-cd27-4c95-a09b-56995a56ee5a"
Oct 02 20:13:10 functional-460513 kubelet[7846]: E1002 20:13:10.100619 7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
Oct 02 20:13:11 functional-460513 kubelet[7846]: E1002 20:13:11.099475 7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
Oct 02 20:13:13 functional-460513 kubelet[7846]: E1002 20:13:13.101293 7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dlfsg" podUID="e55b6bd9-93b5-47cf-bb1d-cc5a9e41aa9a"
Oct 02 20:13:20 functional-460513 kubelet[7846]: E1002 20:13:20.099817 7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-s8zx4" podUID="7076f721-7fec-48cb-b884-2ff8c9abbcd2"
Oct 02 20:13:22 functional-460513 kubelet[7846]: E1002 20:13:22.108726 7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
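All of the kubelet failures above share one root cause: unauthenticated pulls from Docker Hub hitting the toomanyrequests rate limit, which pins echo-server, nginx, and both dashboard images in ImagePullBackOff. Two common mitigations, sketched with this run's profile name and assuming Docker Hub credentials are available on the CI host:

    docker login                                                          # authenticated pulls get a much higher rate limit
    minikube -p functional-460513 image load kicbase/echo-server:latest   # sideload from the host cache, skipping the registry
    # or hand the cluster registry credentials to use for pulls
    # ($DOCKER_USER / $DOCKER_PASS are placeholders, and pods must reference the secret via imagePullSecrets):
    kubectl --context functional-460513 create secret docker-registry regcred \
      --docker-username="$DOCKER_USER" --docker-password="$DOCKER_PASS"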
==> storage-provisioner [33f4fa437242] <==
W1002 20:12:57.411911 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 20:12:59.415354 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 20:12:59.422414 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 20:13:01.426365 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 20:13:01.433453 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 20:13:03.436209 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 20:13:03.441114 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 20:13:05.443954 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 20:13:05.451253 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 20:13:07.454929 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 20:13:07.459617 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 20:13:09.462936 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 20:13:09.467436 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 20:13:11.470747 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 20:13:11.475418 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 20:13:13.478366 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 20:13:13.485586 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 20:13:15.488864 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 20:13:15.493573 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 20:13:17.496574 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 20:13:17.501439 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 20:13:19.509625 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 20:13:19.515589 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 20:13:21.518601 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 20:13:21.524275 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
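The warnings in this block are deprecation notices rather than failures: the storage-provisioner still reads the v1 Endpoints API, which Kubernetes deprecates in favor of discovery.k8s.io/v1 EndpointSlice. The replacement objects can be listed directly to confirm they are being served; a quick check with this run's context:

    kubectl --context functional-460513 -n kube-system get endpointslices
    # same backing data as the deprecated Endpoints reads, served by discovery.k8s.io/v1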
==> storage-provisioner [5e610b6f5c95] <==
W1002 19:56:45.555569 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 19:56:45.562256 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 19:56:47.582004 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 19:56:47.589875 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 19:56:49.592960 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 19:56:49.598092 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 19:56:51.600865 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 19:56:51.608664 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 19:56:53.612538 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 19:56:53.618914 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 19:56:55.622089 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 19:56:55.627118 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 19:56:57.630513 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 19:56:57.635573 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 19:56:59.638282 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 19:56:59.643254 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 19:57:01.646302 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 19:57:01.653892 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 19:57:03.657134 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 19:57:03.662085 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 19:57:05.665042 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 19:57:05.669486 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 19:57:07.672971 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 19:57:07.679989 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
E1002 19:57:09.680844 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
-- /stdout --
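The provisioner's final error above (dial tcp 10.96.0.1:443: connection refused at 19:57:09) matches the apiserver shutdown recorded in the kube-scheduler log: its Endpoints-based leader lock could not be renewed while the control plane was down. The lock itself is an ordinary Endpoints object and can be inspected after restart; a sketch, assuming client-go's usual annotation layout:

    kubectl --context functional-460513 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
    # the control-plane.alpha.kubernetes.io/leader annotation carries the holder identity and renew time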
helpers_test.go:262: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-460513 -n functional-460513
helpers_test.go:269: (dbg) Run: kubectl --context functional-460513 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-s8zx4 hello-node-connect-7d85dfc575-85j8h sp-pod dashboard-metrics-scraper-77bf4d6c4c-s9ptn kubernetes-dashboard-855c9754f9-dlfsg
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context functional-460513 describe pod busybox-mount hello-node-75c85bcc94-s8zx4 hello-node-connect-7d85dfc575-85j8h sp-pod dashboard-metrics-scraper-77bf4d6c4c-s9ptn kubernetes-dashboard-855c9754f9-dlfsg
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-460513 describe pod busybox-mount hello-node-75c85bcc94-s8zx4 hello-node-connect-7d85dfc575-85j8h sp-pod dashboard-metrics-scraper-77bf4d6c4c-s9ptn kubernetes-dashboard-855c9754f9-dlfsg: exit status 1 (117.870372ms)
-- stdout --
Name: busybox-mount
Namespace: default
Priority: 0
Service Account: default
Node: functional-460513/192.168.49.2
Start Time: Thu, 02 Oct 2025 20:08:09 +0000
Labels: integration-test=busybox-mount
Annotations: <none>
Status: Succeeded
IP: 10.244.0.11
IPs:
IP: 10.244.0.11
Containers:
mount-munger:
Container ID: docker://623723a43c0770107afec46f91c2942c306af901014995772650f46fc90a1257
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID: docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 02 Oct 2025 20:08:11 +0000
Finished: Thu, 02 Oct 2025 20:08:11 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dtrfr (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: HostPath (bare host directory volume)
Path: /mount-9p
HostPathType:
kube-api-access-dtrfr:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m13s default-scheduler Successfully assigned default/busybox-mount to functional-460513
Normal Pulling 5m13s kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Normal Pulled 5m11s kubelet Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.134s (2.134s including waiting). Image size: 3547125 bytes.
Normal Created 5m11s kubelet Created container: mount-munger
Normal Started 5m11s kubelet Started container mount-munger
Name: hello-node-75c85bcc94-s8zx4
Namespace: default
Priority: 0
Service Account: default
Node: functional-460513/192.168.49.2
Start Time: Thu, 02 Oct 2025 20:02:02 +0000
Labels: app=hello-node
pod-template-hash=75c85bcc94
Annotations: <none>
Status: Pending
IP: 10.244.0.10
IPs:
IP: 10.244.0.10
Controlled By: ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:
Image: kicbase/echo-server
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cg2d6 (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-cg2d6:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11m default-scheduler Successfully assigned default/hello-node-75c85bcc94-s8zx4 to functional-460513
Warning Failed 9m53s (x3 over 11m) kubelet Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal Pulling 8m27s (x5 over 11m) kubelet Pulling image "kicbase/echo-server"
Warning Failed 8m27s (x5 over 11m) kubelet Error: ErrImagePull
Warning Failed 8m27s (x2 over 11m) kubelet Failed to pull image "kicbase/echo-server": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal BackOff 78s (x43 over 11m) kubelet Back-off pulling image "kicbase/echo-server"
Warning Failed 78s (x43 over 11m) kubelet Error: ImagePullBackOff
Name: hello-node-connect-7d85dfc575-85j8h
Namespace: default
Priority: 0
Service Account: default
Node: functional-460513/192.168.49.2
Start Time: Thu, 02 Oct 2025 19:58:02 +0000
Labels: app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations: <none>
Status: Pending
IP: 10.244.0.9
IPs:
IP: 10.244.0.9
Controlled By: ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:
Image: kicbase/echo-server
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ps69t (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-ps69t:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 15m default-scheduler Successfully assigned default/hello-node-connect-7d85dfc575-85j8h to functional-460513
Normal Pulling 12m (x5 over 15m) kubelet Pulling image "kicbase/echo-server"
Warning Failed 12m (x5 over 15m) kubelet Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning Failed 12m (x5 over 15m) kubelet Error: ErrImagePull
Normal BackOff 11s (x64 over 15m) kubelet Back-off pulling image "kicbase/echo-server"
Warning Failed 11s (x64 over 15m) kubelet Error: ImagePullBackOff
Name: sp-pod
Namespace: default
Priority: 0
Service Account: default
Node: functional-460513/192.168.49.2
Start Time: Thu, 02 Oct 2025 19:57:59 +0000
Labels: test=storage-provisioner
Annotations: <none>
Status: Pending
IP: 10.244.0.8
IPs:
IP: 10.244.0.8
Containers:
myfrontend:
Container ID:
Image: docker.io/nginx
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8qj7g (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mypd:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: myclaim
ReadOnly: false
kube-api-access-8qj7g:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 15m default-scheduler Successfully assigned default/sp-pod to functional-460513
Warning Failed 14m (x3 over 15m) kubelet Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal Pulling 12m (x5 over 15m) kubelet Pulling image "docker.io/nginx"
Warning Failed 12m (x5 over 15m) kubelet Error: ErrImagePull
Warning Failed 12m (x2 over 14m) kubelet Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal BackOff 12s (x65 over 15m) kubelet Back-off pulling image "docker.io/nginx"
Warning Failed 12s (x65 over 15m) kubelet Error: ImagePullBackOff
-- /stdout --
** stderr **
Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-s9ptn" not found
Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-dlfsg" not found
** /stderr **
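The two NotFound errors are a race rather than an additional failure: the dashboard pods existed when the non-running list was captured at helpers_test.go:280, but they were torn down before the describe ran. Re-listing immediately before describing sidesteps the noise; a sketch reusing the helper's own field selector:

    kubectl --context functional-460513 get pods -A --field-selector=status.phase!=Running
    # or, when fetching a known name, tolerate deletion explicitly:
    kubectl --context functional-460513 -n default get pod busybox-mount --ignore-not-found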
helpers_test.go:287: kubectl --context functional-460513 describe pod busybox-mount hello-node-75c85bcc94-s8zx4 hello-node-connect-7d85dfc575-85j8h sp-pod dashboard-metrics-scraper-77bf4d6c4c-s9ptn kubernetes-dashboard-855c9754f9-dlfsg: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.32s)
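Net result: TestFunctional/parallel/DashboardCmd failed after roughly 302s with the kubernetes-dashboard and dashboard-metrics-scraper pods still stuck in ImagePullBackOff behind Docker Hub's unauthenticated rate limit. Docker documents a probe for the remaining pull quota; a sketch, assuming curl and jq on the host (ratelimitpreview/test is Docker's dedicated check image):

    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -s --head -H "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit
    # the ratelimit-limit and ratelimit-remaining headers report the current allowance for this source IP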