=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-304107 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
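The failure at functional_test.go:933 means the `dashboard --url` process never printed a URL to stdout before the test gave up. A minimal sketch in Go of the kind of stdout scan the check implies (waitForURL is a hypothetical helper, not the actual test code):

package sketch

import (
	"bufio"
	"fmt"
	"io"
	"regexp"
)

// waitForURL scans process output line by line until an http(s) URL
// appears, mirroring what the test expects from `dashboard --url`.
func waitForURL(r io.Reader) (string, error) {
	urlRe := regexp.MustCompile(`https?://\S+`)
	sc := bufio.NewScanner(r)
	for sc.Scan() {
		if u := urlRe.FindString(sc.Text()); u != "" {
			return u, nil
		}
	}
	return "", fmt.Errorf("output didn't produce a URL")
}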
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-304107 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-304107 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-304107 --alsologtostderr -v=1] stderr:
I1207 22:38:12.856255 448213 out.go:360] Setting OutFile to fd 1 ...
I1207 22:38:12.856550 448213 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:38:12.856562 448213 out.go:374] Setting ErrFile to fd 2...
I1207 22:38:12.856569 448213 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:38:12.856823 448213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
I1207 22:38:12.857051 448213 mustload.go:66] Loading cluster: functional-304107
I1207 22:38:12.857473 448213 config.go:182] Loaded profile config "functional-304107": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1207 22:38:12.858003 448213 cli_runner.go:164] Run: docker container inspect functional-304107 --format={{.State.Status}}
I1207 22:38:12.878154 448213 host.go:66] Checking if "functional-304107" exists ...
I1207 22:38:12.878428 448213 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1207 22:38:12.944543  448213 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-07 22:38:12.932689259 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1207 22:38:12.944680 448213 api_server.go:166] Checking apiserver status ...
I1207 22:38:12.944727 448213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1207 22:38:12.944762 448213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-304107
I1207 22:38:12.965428 448213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/functional-304107/id_rsa Username:docker}
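sshutil.go above builds its client from the mapped 22/tcp port and the profile's id_rsa. A minimal equivalent with golang.org/x/crypto/ssh (a sketch under those assumptions, not minikube's sshutil):

package sketch

import (
	"os"

	"golang.org/x/crypto/ssh"
)

// dialNode opens an SSH connection to the kic container the way the
// logged sshutil client does: key auth as user "docker" on the
// forwarded port.
func dialNode(addr, keyPath string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway test VM, not for production
	}
	return ssh.Dial("tcp", addr, cfg) // e.g. addr = "127.0.0.1:33162"
}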
I1207 22:38:13.073992 448213 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/9173/cgroup
W1207 22:38:13.084626 448213 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/9173/cgroup: Process exited with status 1
stdout:
stderr:
I1207 22:38:13.084683 448213 ssh_runner.go:195] Run: ls
I1207 22:38:13.089341 448213 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1207 22:38:13.095574 448213 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
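api_server.go:253 above probes the apiserver's /healthz endpoint over TLS and accepts a 200 "ok". A bare-bones version of that probe (a sketch: minikube verifies against the cluster CA, which InsecureSkipVerify stands in for here):

package sketch

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy GETs the healthz endpoint and reports whether it
// answered 200. base is e.g. "https://192.168.49.2:8441".
func apiserverHealthy(base string) (bool, error) {
	c := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // simplification; real code trusts the cluster CA
		},
	}
	resp, err := c.Get(base + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s/healthz returned %d: %s\n", base, resp.StatusCode, body)
	return resp.StatusCode == http.StatusOK, nil
}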
W1207 22:38:13.095658 448213 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1207 22:38:13.095881 448213 config.go:182] Loaded profile config "functional-304107": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1207 22:38:13.095915 448213 addons.go:70] Setting dashboard=true in profile "functional-304107"
I1207 22:38:13.095930 448213 addons.go:239] Setting addon dashboard=true in "functional-304107"
I1207 22:38:13.095971 448213 host.go:66] Checking if "functional-304107" exists ...
I1207 22:38:13.096492 448213 cli_runner.go:164] Run: docker container inspect functional-304107 --format={{.State.Status}}
I1207 22:38:13.120289 448213 out.go:179] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1207 22:38:13.121696 448213 out.go:179] - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1207 22:38:13.123060 448213 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1207 22:38:13.123082 448213 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1207 22:38:13.123149 448213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-304107
I1207 22:38:13.143790 448213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/functional-304107/id_rsa Username:docker}
I1207 22:38:13.252720 448213 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1207 22:38:13.252746 448213 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1207 22:38:13.266839 448213 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1207 22:38:13.266867 448213 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1207 22:38:13.282195 448213 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1207 22:38:13.282221 448213 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1207 22:38:13.296522 448213 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1207 22:38:13.296548 448213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1207 22:38:13.311052 448213 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1207 22:38:13.311081 448213 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1207 22:38:13.325810 448213 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1207 22:38:13.325838 448213 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1207 22:38:13.340937 448213 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1207 22:38:13.340966 448213 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1207 22:38:13.356632 448213 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1207 22:38:13.356659 448213 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1207 22:38:13.372962 448213 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1207 22:38:13.372987 448213 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1207 22:38:13.387042 448213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1207 22:38:13.917423 448213 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p functional-304107 addons enable metrics-server
I1207 22:38:13.918723 448213 addons.go:202] Writing out "functional-304107" config to set dashboard=true...
W1207 22:38:13.918984 448213 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1207 22:38:13.919557  448213 kapi.go:59] client config for functional-304107: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.key", CAFile:"/home/jenkins/minikube-integration/22054-393577/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1207 22:38:13.920037 448213 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1207 22:38:13.920069 448213 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1207 22:38:13.920077 448213 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1207 22:38:13.920086 448213 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1207 22:38:13.920091 448213 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1207 22:38:13.929247 448213 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard kubernetes-dashboard 99bce6f0-9463-4018-b0e3-5f0a2c287ce8 879 0 2025-12-07 22:38:13 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-07 22:38:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.96.232.143,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.96.232.143],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
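service.go:215 above confirms the kubernetes-dashboard Service object exists before launching the proxy. The equivalent lookup with client-go (a sketch, assuming a kubeconfig path is available):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// dashboardService fetches the Service the health check looks for:
// name kubernetes-dashboard in namespace kubernetes-dashboard.
func dashboardService(kubeconfig string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	_, err = cs.CoreV1().Services("kubernetes-dashboard").
		Get(context.TODO(), "kubernetes-dashboard", metav1.GetOptions{})
	return err
}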
W1207 22:38:13.929380 448213 out.go:285] * Launching proxy ...
* Launching proxy ...
I1207 22:38:13.929430 448213 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-304107 proxy --port 36195]
I1207 22:38:13.929712 448213 dashboard.go:159] Waiting for kubectl to output host:port ...
I1207 22:38:13.984055 448213 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
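dashboard.go:154-177 above launches `kubectl proxy` as a child process and waits for its "Starting to serve on host:port" banner on stdout. Roughly (a sketch, not the actual dashboard.go):

package sketch

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

// startProxy runs kubectl proxy and blocks until it announces the
// address it is serving on, returning that host:port. The caller is
// expected to keep the process alive and eventually kill it.
func startProxy(kubectlContext, port string) (string, error) {
	cmd := exec.Command("kubectl", "--context", kubectlContext, "proxy", "--port", port)
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return "", err
	}
	if err := cmd.Start(); err != nil {
		return "", err
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		// e.g. "Starting to serve on 127.0.0.1:36195"
		if rest, ok := strings.CutPrefix(sc.Text(), "Starting to serve on "); ok {
			return strings.TrimSpace(rest), nil
		}
	}
	return "", fmt.Errorf("kubectl proxy never reported host:port")
}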
W1207 22:38:13.984135 448213 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1207 22:38:13.993839 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cf26a0c6-c6d9-4300-989f-fecadade19cf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:13 GMT]] Body:0xc0008c6700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014e8c0 TLS:<nil>}
I1207 22:38:13.993949 448213 retry.go:31] will retry after 59.863µs: Temporary Error: unexpected response code: 503
I1207 22:38:13.997657 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b8b03d91-3852-40dc-995f-1ae459adc857] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:13 GMT]] Body:0xc0009ab680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208c80 TLS:<nil>}
I1207 22:38:13.997732 448213 retry.go:31] will retry after 117.573µs: Temporary Error: unexpected response code: 503
I1207 22:38:14.001333 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[15395964-f34d-4781-ab85-b24cc1416519] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc000938840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000368000 TLS:<nil>}
I1207 22:38:14.001403 448213 retry.go:31] will retry after 114.049µs: Temporary Error: unexpected response code: 503
I1207 22:38:14.004847 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7c7ca8db-bebe-452e-b597-e3c8bfb316b9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc0009ab800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014ea00 TLS:<nil>}
I1207 22:38:14.004908 448213 retry.go:31] will retry after 249.432µs: Temporary Error: unexpected response code: 503
I1207 22:38:14.008175 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e236c3af-b832-4d94-9577-451b2c3bbf51] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc0008c6800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000368140 TLS:<nil>}
I1207 22:38:14.008230 448213 retry.go:31] will retry after 666.034µs: Temporary Error: unexpected response code: 503
I1207 22:38:14.011825 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f3a28c11-ec72-47b2-9741-8272946e0747] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc000938a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208f00 TLS:<nil>}
I1207 22:38:14.011879 448213 retry.go:31] will retry after 1.01759ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.015186 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[aaa914bf-59b8-4f15-8d2f-55620db50abf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc0009abd40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014eb40 TLS:<nil>}
I1207 22:38:14.015265 448213 retry.go:31] will retry after 1.037014ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.018464 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ee66e53f-7067-4b0e-bba6-25924699b13a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc0008c68c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000368280 TLS:<nil>}
I1207 22:38:14.018521 448213 retry.go:31] will retry after 2.336945ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.023782 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[92ecbf7a-ead6-4ccd-9f08-e205fb1a73ee] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc0009abe40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209180 TLS:<nil>}
I1207 22:38:14.023839 448213 retry.go:31] will retry after 2.412507ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.029031 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6a56f758-03a3-4475-9acc-57e84e783f6b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc000938b40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003683c0 TLS:<nil>}
I1207 22:38:14.029080 448213 retry.go:31] will retry after 3.97432ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.035410 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[20d456c4-2534-4a2d-8929-5581e50358b1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc0009abf40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014ec80 TLS:<nil>}
I1207 22:38:14.035460 448213 retry.go:31] will retry after 3.305187ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.041957 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[79a38600-7ceb-46ca-a22d-2058ad3cf893] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc0008c69c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000368500 TLS:<nil>}
I1207 22:38:14.042016 448213 retry.go:31] will retry after 8.790709ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.054180 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[49322e45-81b9-4723-b31c-80c2642adf4d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc0008c6a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002092c0 TLS:<nil>}
I1207 22:38:14.054263 448213 retry.go:31] will retry after 15.059473ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.072157 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[40e6be51-293e-43a3-8cca-fefb28bca097] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc000445b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209400 TLS:<nil>}
I1207 22:38:14.072231 448213 retry.go:31] will retry after 24.1118ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.099292 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[daade65d-5190-4bb9-b8c1-371a4e794aed] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc000445d00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d4500 TLS:<nil>}
I1207 22:38:14.099380 448213 retry.go:31] will retry after 37.685925ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.140700 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ac355170-a902-4029-9693-5b1c36cf1b47] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc000445e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d4780 TLS:<nil>}
I1207 22:38:14.140774 448213 retry.go:31] will retry after 36.15274ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.181066 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[83a91787-371f-4ef7-8f7f-d01a28f3bb27] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc0008c6b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d48c0 TLS:<nil>}
I1207 22:38:14.181170 448213 retry.go:31] will retry after 45.443472ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.230920 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7f8fc8b6-4635-472f-beec-971d8fe3d364] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc001722000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209540 TLS:<nil>}
I1207 22:38:14.230991 448213 retry.go:31] will retry after 71.96418ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.306805 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d66362c3-905a-43f0-9011-6d14d90743dc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc000938d40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d4a00 TLS:<nil>}
I1207 22:38:14.306884 448213 retry.go:31] will retry after 210.770262ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.521296 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[505be831-14a1-465c-b430-0d3ec50ea905] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc001722100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014ef00 TLS:<nil>}
I1207 22:38:14.521387 448213 retry.go:31] will retry after 317.141075ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.842099 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a6d71b68-ec62-41e6-9567-24fb7dbdd057] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc000938e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d4b40 TLS:<nil>}
I1207 22:38:14.842177 448213 retry.go:31] will retry after 304.927123ms: Temporary Error: unexpected response code: 503
I1207 22:38:15.151031 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[601f3e03-1332-4166-8ef6-682c9075cec1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:15 GMT]] Body:0xc001722200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014f040 TLS:<nil>}
I1207 22:38:15.151111 448213 retry.go:31] will retry after 317.466762ms: Temporary Error: unexpected response code: 503
I1207 22:38:15.473581 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d5fd1c9d-6adb-4aa2-bc9e-3481af6d391d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:15 GMT]] Body:0xc000939000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d4c80 TLS:<nil>}
I1207 22:38:15.473685 448213 retry.go:31] will retry after 971.021144ms: Temporary Error: unexpected response code: 503
I1207 22:38:16.448569 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dfe408eb-9c88-49cc-aa1e-f714b76be5bf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:16 GMT]] Body:0xc0008c6f40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014f180 TLS:<nil>}
I1207 22:38:16.448650 448213 retry.go:31] will retry after 995.666431ms: Temporary Error: unexpected response code: 503
I1207 22:38:17.447680 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[72e8da47-6a7d-4144-bd1f-0b07b0f5c1ac] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:17 GMT]] Body:0xc001722300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209680 TLS:<nil>}
I1207 22:38:17.447755 448213 retry.go:31] will retry after 1.120590543s: Temporary Error: unexpected response code: 503
I1207 22:38:18.572054 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4a91fc17-84c5-4ef0-ab78-7f274a0200b9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:18 GMT]] Body:0xc0018980c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d4dc0 TLS:<nil>}
I1207 22:38:18.572134 448213 retry.go:31] will retry after 2.604835681s: Temporary Error: unexpected response code: 503
I1207 22:38:21.182730 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ccc03670-4cc1-437e-bb92-d655d39587cc] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:21 GMT]] Body:0xc000939100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002097c0 TLS:<nil>}
I1207 22:38:21.182804 448213 retry.go:31] will retry after 2.530331176s: Temporary Error: unexpected response code: 503
I1207 22:38:23.717422 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c453747e-9cf9-459b-82bd-267304549f71] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:23 GMT]] Body:0xc0018981c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014f2c0 TLS:<nil>}
I1207 22:38:23.717498 448213 retry.go:31] will retry after 2.935087579s: Temporary Error: unexpected response code: 503
I1207 22:38:26.656257 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a69b956f-5843-48f6-af57-5094f30b4610] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:26 GMT]] Body:0xc001898240 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014f400 TLS:<nil>}
I1207 22:38:26.656357 448213 retry.go:31] will retry after 7.498770579s: Temporary Error: unexpected response code: 503
I1207 22:38:34.159052 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1d82ca55-9535-4e64-8944-9e13676a6141] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:34 GMT]] Body:0xc001898340 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209900 TLS:<nil>}
I1207 22:38:34.159128 448213 retry.go:31] will retry after 18.354015196s: Temporary Error: unexpected response code: 503
I1207 22:38:52.520090 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[550c8f02-b429-4b7c-b896-7935da6e23de] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:52 GMT]] Body:0xc000939280 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209a40 TLS:<nil>}
I1207 22:38:52.520166 448213 retry.go:31] will retry after 17.8186629s: Temporary Error: unexpected response code: 503
I1207 22:39:10.344489 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8252a00b-f57b-41c4-bd30-854e20978bc0] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:39:10 GMT]] Body:0xc001722440 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014f680 TLS:<nil>}
I1207 22:39:10.344576 448213 retry.go:31] will retry after 25.771615049s: Temporary Error: unexpected response code: 503
I1207 22:39:36.119623 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4cec0bd9-067b-454e-af56-bebca988a8bd] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:39:36 GMT]] Body:0xc0017224c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014f7c0 TLS:<nil>}
I1207 22:39:36.119707 448213 retry.go:31] will retry after 29.676097463s: Temporary Error: unexpected response code: 503
I1207 22:40:05.799705 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[300296ff-3bda-441e-ba0f-38709ce6105c] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:40:05 GMT]] Body:0xc001722540 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014f900 TLS:<nil>}
I1207 22:40:05.799786 448213 retry.go:31] will retry after 36.297604507s: Temporary Error: unexpected response code: 503
I1207 22:40:42.102205 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1d213b29-fdab-458e-9f67-1b7e4d8c6321] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:40:42 GMT]] Body:0xc001722040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000368640 TLS:<nil>}
I1207 22:40:42.102305 448213 retry.go:31] will retry after 53.053244496s: Temporary Error: unexpected response code: 503
I1207 22:41:35.159121 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bcfc5801-1fe8-4c2e-a4b9-1995efd8cac8] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:41:35 GMT]] Body:0xc001722140 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014e140 TLS:<nil>}
I1207 22:41:35.159207 448213 retry.go:31] will retry after 48.259824487s: Temporary Error: unexpected response code: 503
I1207 22:42:23.423420 448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3f964db3-6505-4852-bd39-4be7b391b245] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:42:23 GMT]] Body:0xc00181a140 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014e280 TLS:<nil>}
I1207 22:42:23.423517 448213 retry.go:31] will retry after 1m16.127328834s: Temporary Error: unexpected response code: 503
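Every probe above gets a 503 because the dashboard pod never becomes ready, and retry.go backs off from microseconds toward the test timeout. The shape of that loop (a sketch with made-up growth constants, not minikube's actual retry.go):

package sketch

import (
	"math/rand"
	"net/http"
	"time"
)

// pollUntilOK polls url with randomized, roughly doubling backoff until
// it gets a 200 or the deadline passes, like the 503 retry trail above.
func pollUntilOK(url string, timeout time.Duration) bool {
	delay := 100 * time.Microsecond
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return true
			}
		}
		time.Sleep(delay)
		// grow ~1.5-2.5x with jitter; cap so late retries stay bounded
		delay = time.Duration(float64(delay) * (1.5 + rand.Float64()))
		if delay > time.Minute {
			delay = time.Minute
		}
	}
	return false
}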
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect functional-304107
helpers_test.go:243: (dbg) docker inspect functional-304107:
-- stdout --
[
{
"Id": "769725322f76d88988e043b1070348920134aa3ad078d15289d551e08a685fb9",
"Created": "2025-12-07T22:35:17.716324358Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 428375,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-07T22:35:17.756587169Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
"ResolvConfPath": "/var/lib/docker/containers/769725322f76d88988e043b1070348920134aa3ad078d15289d551e08a685fb9/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/769725322f76d88988e043b1070348920134aa3ad078d15289d551e08a685fb9/hostname",
"HostsPath": "/var/lib/docker/containers/769725322f76d88988e043b1070348920134aa3ad078d15289d551e08a685fb9/hosts",
"LogPath": "/var/lib/docker/containers/769725322f76d88988e043b1070348920134aa3ad078d15289d551e08a685fb9/769725322f76d88988e043b1070348920134aa3ad078d15289d551e08a685fb9-json.log",
"Name": "/functional-304107",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-304107:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "100m"
}
},
"NetworkMode": "functional-304107",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "private",
"Dns": null,
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": null,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "769725322f76d88988e043b1070348920134aa3ad078d15289d551e08a685fb9",
"LowerDir": "/var/lib/docker/overlay2/217300164bc4977a7c5d3e80bed0f494a3eb7d2123ea021d2e19e11a0ffb582c-init/diff:/var/lib/docker/overlay2/72e2c0d34d3438044c6ca8754190358557351efc0aeb527bd1060ce52e748152/diff",
"MergedDir": "/var/lib/docker/overlay2/217300164bc4977a7c5d3e80bed0f494a3eb7d2123ea021d2e19e11a0ffb582c/merged",
"UpperDir": "/var/lib/docker/overlay2/217300164bc4977a7c5d3e80bed0f494a3eb7d2123ea021d2e19e11a0ffb582c/diff",
"WorkDir": "/var/lib/docker/overlay2/217300164bc4977a7c5d3e80bed0f494a3eb7d2123ea021d2e19e11a0ffb582c/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-304107",
"Source": "/var/lib/docker/volumes/functional-304107/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-304107",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-304107",
"name.minikube.sigs.k8s.io": "functional-304107",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"SandboxID": "e3ef7a0b3f8947e2c4fff10e59f55e8dc43d75595ece1feeea31d83e45513ae7",
"SandboxKey": "/var/run/docker/netns/e3ef7a0b3f89",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33162"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33163"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33166"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33164"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33165"
}
]
},
"Networks": {
"functional-304107": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2",
"IPv6Address": ""
},
"Links": null,
"Aliases": null,
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "4f86ce54efb9121df084309ae3492628b6ce2282fe48f7117090c21b5dae7084",
"EndpointID": "3682bfeabb8df07590c63050c4c59c5ed08fee3a520ae01b51f1dfeef06b031a",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"MacAddress": "5e:c9:74:e6:1c:3f",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-304107",
"769725322f76"
]
}
}
}
}
]
-- /stdout --
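The NetworkSettings.Ports map in the inspect output above is what cli_runner's Go template (`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`) reads the SSH port from. Decoding the same JSON directly (a sketch):

package sketch

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// hostPort extracts the host port bound to a container port, e.g.
// hostPort("functional-304107", "22/tcp") -> "33162" given the
// inspect output above.
func hostPort(container, portProto string) (string, error) {
	out, err := exec.Command("docker", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var infos []struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIP   string `json:"HostIp"`
				HostPort string `json:"HostPort"`
			}
		}
	}
	if err := json.Unmarshal(out, &infos); err != nil {
		return "", err
	}
	if len(infos) == 0 {
		return "", fmt.Errorf("no such container: %s", container)
	}
	bindings := infos[0].NetworkSettings.Ports[portProto]
	if len(bindings) == 0 {
		return "", fmt.Errorf("no host binding for %s", portProto)
	}
	return bindings[0].HostPort, nil
}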
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-304107 -n functional-304107
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p functional-304107 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-304107 logs -n 25: (1.049680895s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs:
-- stdout --
==> Audit <==
┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ functional-304107 ssh findmnt -T /mount1 │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ │
│ mount │ -p functional-304107 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4148170977/001:/mount2 --alsologtostderr -v=1 │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ │
│ mount │ -p functional-304107 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4148170977/001:/mount1 --alsologtostderr -v=1 │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ │
│ ssh │ functional-304107 ssh findmnt -T /mount1 │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
│ ssh │ functional-304107 ssh findmnt -T /mount2 │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
│ ssh │ functional-304107 ssh findmnt -T /mount3 │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
│ mount │ -p functional-304107 --kill=true │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ │
│ cp │ functional-304107 cp testdata/cp-test.txt /home/docker/cp-test.txt │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
│ ssh │ functional-304107 ssh -n functional-304107 sudo cat /home/docker/cp-test.txt │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
│ cp │ functional-304107 cp functional-304107:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1873481159/001/cp-test.txt │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
│ ssh │ functional-304107 ssh -n functional-304107 sudo cat /home/docker/cp-test.txt │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
│ cp │ functional-304107 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
│ ssh │ functional-304107 ssh -n functional-304107 sudo cat /tmp/does/not/exist/cp-test.txt │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
│ ssh │ functional-304107 ssh echo hello │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
│ ssh │ functional-304107 ssh cat /etc/hostname │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
│ image │ functional-304107 image ls --format short --alsologtostderr │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
│ image │ functional-304107 image ls --format yaml --alsologtostderr │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
│ ssh │ functional-304107 ssh pgrep buildkitd │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ │
│ image │ functional-304107 image ls --format json --alsologtostderr │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
│ image │ functional-304107 image build -t localhost/my-image:functional-304107 testdata/build --alsologtostderr │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
│ image │ functional-304107 image ls --format table --alsologtostderr │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
│ update-context │ functional-304107 update-context --alsologtostderr -v=2 │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
│ update-context │ functional-304107 update-context --alsologtostderr -v=2 │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
│ update-context │ functional-304107 update-context --alsologtostderr -v=2 │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
│ image │ functional-304107 image ls │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/07 22:38:12
Running on machine: ubuntu-20-agent-6
Binary: Built with gc go1.25.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1207 22:38:12.603145 448076 out.go:360] Setting OutFile to fd 1 ...
I1207 22:38:12.603268 448076 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:38:12.603276 448076 out.go:374] Setting ErrFile to fd 2...
I1207 22:38:12.603281 448076 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:38:12.603518 448076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
I1207 22:38:12.604030 448076 out.go:368] Setting JSON to false
I1207 22:38:12.605523 448076 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4836,"bootTime":1765142257,"procs":260,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1207 22:38:12.605620 448076 start.go:143] virtualization: kvm guest
I1207 22:38:12.607813 448076 out.go:179] * [functional-304107] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1207 22:38:12.609154 448076 out.go:179] - MINIKUBE_LOCATION=22054
I1207 22:38:12.609137 448076 notify.go:221] Checking for updates...
I1207 22:38:12.610372 448076 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1207 22:38:12.611730 448076 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22054-393577/kubeconfig
I1207 22:38:12.613006 448076 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-393577/.minikube
I1207 22:38:12.614553 448076 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1207 22:38:12.615836 448076 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1207 22:38:12.617450 448076 config.go:182] Loaded profile config "functional-304107": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1207 22:38:12.618102 448076 driver.go:422] Setting default libvirt URI to qemu:///system
I1207 22:38:12.646801 448076 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
I1207 22:38:12.646917 448076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1207 22:38:12.711095  448076 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-07 22:38:12.699458963 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1207 22:38:12.711265 448076 docker.go:319] overlay module found
I1207 22:38:12.713410 448076 out.go:179] * Using the docker driver based on existing profile
I1207 22:38:12.714638 448076 start.go:309] selected driver: docker
I1207 22:38:12.714655  448076 start.go:927] validating driver "docker" against &{Name:functional-304107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-304107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1207 22:38:12.714784 448076 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1207 22:38:12.714913 448076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1207 22:38:12.783048  448076 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-07 22:38:12.7697066 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1207 22:38:12.784020 448076 cni.go:84] Creating CNI manager for ""
I1207 22:38:12.784118 448076 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1207 22:38:12.784214 448076 start.go:353] cluster config:
{Name:functional-304107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-304107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1207 22:38:12.786367 448076 out.go:179] * dry-run validation complete!
==> Docker <==
Dec 07 22:38:19 functional-304107 dockerd[7426]: time="2025-12-07T22:38:19.322999997Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Dec 07 22:38:19 functional-304107 dockerd[7426]: time="2025-12-07T22:38:19.814535375Z" level=info msg="ignoring event" container=c3712c8864cda85cca1c2c040e753ee71cd6cea918c94a1c4b1b1763ab3f86ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 07 22:38:20 functional-304107 dockerd[7426]: time="2025-12-07T22:38:20.077445751Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Dec 07 22:38:20 functional-304107 cri-dockerd[7726]: time="2025-12-07T22:38:20Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
Dec 07 22:38:20 functional-304107 dockerd[7426]: time="2025-12-07T22:38:20.319691632Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Dec 07 22:38:20 functional-304107 dockerd[7426]: time="2025-12-07T22:38:20.800569904Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Dec 07 22:38:28 functional-304107 dockerd[7426]: 2025/12/07 22:38:28 http2: server: error reading preface from client @: read unix /var/run/docker.sock->@: read: connection reset by peer
Dec 07 22:38:30 functional-304107 dockerd[7426]: time="2025-12-07T22:38:30.401741382Z" level=info msg="sbJoin: gwep4 ''->'2e3c7f304abe', gwep6 ''->''"
Dec 07 22:38:33 functional-304107 dockerd[7426]: time="2025-12-07T22:38:33.013919707Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Dec 07 22:38:33 functional-304107 dockerd[7426]: time="2025-12-07T22:38:33.490633178Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Dec 07 22:38:36 functional-304107 dockerd[7426]: time="2025-12-07T22:38:36.012893226Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Dec 07 22:38:36 functional-304107 dockerd[7426]: time="2025-12-07T22:38:36.491911678Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Dec 07 22:39:02 functional-304107 dockerd[7426]: time="2025-12-07T22:39:02.011735339Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Dec 07 22:39:02 functional-304107 dockerd[7426]: time="2025-12-07T22:39:02.496904823Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Dec 07 22:39:02 functional-304107 dockerd[7426]: time="2025-12-07T22:39:02.744128567Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Dec 07 22:39:03 functional-304107 dockerd[7426]: time="2025-12-07T22:39:03.223445801Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Dec 07 22:39:50 functional-304107 dockerd[7426]: time="2025-12-07T22:39:50.011704069Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Dec 07 22:39:50 functional-304107 dockerd[7426]: time="2025-12-07T22:39:50.488213758Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Dec 07 22:39:53 functional-304107 dockerd[7426]: time="2025-12-07T22:39:53.009618685Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Dec 07 22:39:53 functional-304107 dockerd[7426]: time="2025-12-07T22:39:53.492482277Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Dec 07 22:41:14 functional-304107 dockerd[7426]: time="2025-12-07T22:41:14.016685028Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Dec 07 22:41:14 functional-304107 dockerd[7426]: time="2025-12-07T22:41:14.497253088Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Dec 07 22:41:25 functional-304107 dockerd[7426]: time="2025-12-07T22:41:25.011550726Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Dec 07 22:41:25 functional-304107 dockerd[7426]: time="2025-12-07T22:41:25.768912175Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Dec 07 22:41:25 functional-304107 cri-dockerd[7726]: time="2025-12-07T22:41:25Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
0c14fafe1a9f1 nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42 4 minutes ago Running myfrontend 0 1cd7ae278a95d sp-pod default
bb849dc65ed8c gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 4 minutes ago Exited mount-munger 0 c3712c8864cda busybox-mount default
f745833c35485 mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb 4 minutes ago Running mysql 0 8e67c5ed948ca mysql-5bb876957f-4jlkm default
4da267e87e100 kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 5 minutes ago Running echo-server 0 513192ac0d927 hello-node-75c85bcc94-lsdfr default
02a98c1614cdd kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 5 minutes ago Running echo-server 0 7c671e3ae4909 hello-node-connect-7d85dfc575-bw6s8 default
f1647ace06bf4 nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14 5 minutes ago Running nginx 0 5bbe6db70914c nginx-svc default
6f84c16817923 52546a367cc9e 5 minutes ago Running coredns 2 3b7754c917be7 coredns-66bc5c9577-4qp99 kube-system
822f5ff4ed500 6e38f40d628db 5 minutes ago Running storage-provisioner 3 4c45eea6d7266 storage-provisioner kube-system
02e2d85da0b9f 8aa150647e88a 5 minutes ago Running kube-proxy 2 12d75babeeb60 kube-proxy-pd5wd kube-system
ffaa90d8d1d60 a3e246e9556e9 5 minutes ago Running etcd 2 2d5d029e04d9a etcd-functional-304107 kube-system
6b3c8ac7211b2 01e8bacf0f500 5 minutes ago Running kube-controller-manager 2 f7bdd00fb369c kube-controller-manager-functional-304107 kube-system
41bbee6a06fdf 88320b5498ff2 5 minutes ago Running kube-scheduler 2 cbf972909b91d kube-scheduler-functional-304107 kube-system
dba7457ece939 a5f569d49a979 5 minutes ago Running kube-apiserver 0 a60ba30d5690b kube-apiserver-functional-304107 kube-system
5e90eed3fbdde 6e38f40d628db 6 minutes ago Exited storage-provisioner 2 dad57689661d5 storage-provisioner kube-system
f18e018fee324 52546a367cc9e 6 minutes ago Exited coredns 1 7af2649c52925 coredns-66bc5c9577-4qp99 kube-system
e48c781da85a4 8aa150647e88a 6 minutes ago Exited kube-proxy 1 7a6e7aad25963 kube-proxy-pd5wd kube-system
77968cab8a677 a3e246e9556e9 6 minutes ago Exited etcd 1 15f7444f6a22b etcd-functional-304107 kube-system
db300a51b23f0 01e8bacf0f500 6 minutes ago Exited kube-controller-manager 1 3d83f012e661e kube-controller-manager-functional-304107 kube-system
41fa6477afc27 88320b5498ff2 6 minutes ago Exited kube-scheduler 1 0b4dd64d6231e kube-scheduler-functional-304107 kube-system
==> coredns [6f84c1681792] <==
maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:49309 - 23167 "HINFO IN 5352894005535060145.8675857745092316878. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027266123s
==> coredns [f18e018fee32] <==
maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
.:53
[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:44003 - 30256 "HINFO IN 4101114324048550541.6981339967762851229. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021879745s
==> describe nodes <==
Name: functional-304107
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-304107
kubernetes.io/os=linux
minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
minikube.k8s.io/name=functional-304107
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_07T22_35_34_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 07 Dec 2025 22:35:32 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-304107
AcquireTime: <unset>
RenewTime: Sun, 07 Dec 2025 22:43:12 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sun, 07 Dec 2025 22:38:58 +0000 Sun, 07 Dec 2025 22:35:30 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 07 Dec 2025 22:38:58 +0000 Sun, 07 Dec 2025 22:35:30 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 07 Dec 2025 22:38:58 +0000 Sun, 07 Dec 2025 22:35:30 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sun, 07 Dec 2025 22:38:58 +0000 Sun, 07 Dec 2025 22:35:37 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-304107
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863360Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863360Ki
pods: 110
System Info:
Machine ID: a6e66d6047cad46f36f1a6e369316001
System UUID: 3a53179d-6e12-4880-a549-d2e469b40494
Boot ID: 10618540-d4ef-4c75-8cf1-8b1c0379fe5e
Kernel Version: 6.8.0-1044-gcp
OS Image: Debian GNU/Linux 12 (bookworm)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://29.1.2
Kubelet Version: v1.34.2
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (14 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-75c85bcc94-lsdfr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m17s
default hello-node-connect-7d85dfc575-bw6s8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m22s
default mysql-5bb876957f-4jlkm 600m (7%) 700m (8%) 512Mi (1%) 700Mi (2%) 5m9s
default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m24s
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m5s
kube-system coredns-66bc5c9577-4qp99 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 7m33s
kube-system etcd-functional-304107 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 7m39s
kube-system kube-apiserver-functional-304107 250m (3%) 0 (0%) 0 (0%) 0 (0%) 5m45s
kube-system kube-controller-manager-functional-304107 200m (2%) 0 (0%) 0 (0%) 0 (0%) 7m39s
kube-system kube-proxy-pd5wd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m34s
kube-system kube-scheduler-functional-304107 100m (1%) 0 (0%) 0 (0%) 0 (0%) 7m40s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m33s
kubernetes-dashboard dashboard-metrics-scraper-77bf4d6c4c-fll54 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m
kubernetes-dashboard kubernetes-dashboard-855c9754f9-rgc2w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1350m (16%) 700m (8%)
memory 682Mi (2%) 870Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 7m33s kube-proxy
Normal Starting 5m45s kube-proxy
Normal Starting 6m32s kube-proxy
Normal Starting 7m39s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 7m39s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 7m39s kubelet Node functional-304107 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 7m39s kubelet Node functional-304107 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 7m39s kubelet Node functional-304107 status is now: NodeHasSufficientPID
Normal NodeReady 7m36s kubelet Node functional-304107 status is now: NodeReady
Normal RegisteredNode 7m34s node-controller Node functional-304107 event: Registered Node functional-304107 in Controller
Warning ContainerGCFailed 6m39s kubelet rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Normal RegisteredNode 6m30s node-controller Node functional-304107 event: Registered Node functional-304107 in Controller
Normal Starting 5m49s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 5m49s (x8 over 5m49s) kubelet Node functional-304107 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m49s (x8 over 5m49s) kubelet Node functional-304107 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m49s (x7 over 5m49s) kubelet Node functional-304107 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 5m49s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 5m43s node-controller Node functional-304107 event: Registered Node functional-304107 in Controller
==> dmesg <==
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 bd e8 a2 e9 38 08 06
[ +4.371009] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 26 7f 89 eb 37 08 06
[Dec 7 22:32] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 90 44 62 17 5d 08 06
[ +0.000614] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff ba af 6f b2 4f 4e 08 06
[Dec 7 22:33] IPv4: martian source 10.244.0.1 from 10.244.0.31, on dev eth0
[ +0.000010] ll header: 00000000: ff ff ff ff ff ff 9a cf 5d 26 73 e5 08 06
[ +0.000688] IPv4: martian source 10.244.0.31 from 10.244.0.3, on dev eth0
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff ba af 6f b2 4f 4e 08 06
[ +0.000675] IPv4: martian source 10.244.0.31 from 10.244.0.5, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff a2 23 e0 4c bb d1 08 06
[ +14.855650] IPv4: martian source 10.244.0.33 from 10.244.0.26, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 26 7f 89 eb 37 08 06
[ +1.290739] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff ba af 6f b2 4f 4e 08 06
[Dec 7 22:35] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 ed 23 6d c5 f1 08 06
[ +0.101054] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 62 04 91 35 35 08 06
[Dec 7 22:36] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff fe a9 7b 3e 23 12 08 06
[Dec 7 22:37] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff ea 06 21 81 6b 9f 08 06
==> etcd [77968cab8a67] <==
{"level":"warn","ts":"2025-12-07T22:36:39.291627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35040","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:36:39.298284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35044","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:36:39.307810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35070","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:36:39.314523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35094","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:36:39.321554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35106","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:36:39.328176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35118","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:36:39.336680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35134","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:36:39.344097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35148","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:36:39.350584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35162","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:36:39.358536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35182","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:36:39.368743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35194","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:36:39.376592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35200","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:36:39.384278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35224","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:36:39.399357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35250","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:36:39.413357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35260","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:36:39.420818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35276","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:36:39.428832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35304","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:36:39.436820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35326","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:36:39.443454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35346","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:36:39.450843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35362","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:36:39.458558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35370","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:36:39.472126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35388","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:36:39.479937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35402","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:36:39.486560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35426","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:36:39.531211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35432","server-name":"","error":"EOF"}
==> etcd [ffaa90d8d1d6] <==
{"level":"warn","ts":"2025-12-07T22:37:26.355127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47554","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:37:26.361901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47572","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:37:26.374406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47586","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:37:26.381098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47602","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:37:26.387526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47634","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:37:26.393930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47644","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:37:26.400552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47664","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:37:26.413634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47712","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:37:26.420320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47714","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:37:26.426762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47728","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:37:26.446206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47754","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:37:26.452747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47768","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:37:26.459318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47780","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:37:26.465919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47798","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:37:26.473623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47820","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:37:26.480834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47850","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:37:26.488109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47856","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:37:26.494566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47870","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:37:26.500996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47874","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:37:26.508990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47888","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:37:26.515518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47892","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:37:26.528176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47908","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:37:26.534630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47928","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:37:26.541094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47950","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:37:26.591786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47974","server-name":"","error":"EOF"}
==> kernel <==
22:43:14 up 1:25, 0 user, load average: 0.36, 0.71, 1.43
Linux functional-304107 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kube-apiserver [dba7457ece93] <==
E1207 22:37:27.036884 1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
I1207 22:37:27.060084 1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
I1207 22:37:27.866960 1 controller.go:667] quota admission added evaluator for: serviceaccounts
I1207 22:37:27.934482 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I1207 22:37:28.446349 1 controller.go:667] quota admission added evaluator for: deployments.apps
I1207 22:37:28.476950 1 controller.go:667] quota admission added evaluator for: daemonsets.apps
I1207 22:37:28.496908 1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1207 22:37:28.501915 1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1207 22:37:30.673361 1 controller.go:667] quota admission added evaluator for: endpoints
I1207 22:37:30.723071 1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1207 22:37:30.774827 1 controller.go:667] quota admission added evaluator for: replicasets.apps
I1207 22:37:45.329901 1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.186.167"}
I1207 22:37:49.916091 1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.110.150.78"}
I1207 22:37:51.221772 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.103.18.24"}
I1207 22:37:56.340090 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.99.189.33"}
I1207 22:38:04.706261 1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.109.11.211"}
E1207 22:38:07.360839 1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:38510: use of closed network connection
I1207 22:38:13.777117 1 controller.go:667] quota admission added evaluator for: namespaces
I1207 22:38:13.894232 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.232.143"}
I1207 22:38:13.910016 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.210.158"}
E1207 22:38:21.923762 1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56822: use of closed network connection
E1207 22:38:23.528198 1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56838: use of closed network connection
E1207 22:38:25.404548 1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56864: use of closed network connection
E1207 22:38:26.740772 1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56872: use of closed network connection
E1207 22:38:28.361721 1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56900: use of closed network connection
==> kube-controller-manager [6b3c8ac7211b] <==
I1207 22:37:30.369228 1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
I1207 22:37:30.369241 1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
I1207 22:37:30.369196 1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
I1207 22:37:30.369686 1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
I1207 22:37:30.370622 1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
I1207 22:37:30.370659 1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
I1207 22:37:30.370733 1 shared_informer.go:356] "Caches are synced" controller="attach detach"
I1207 22:37:30.370765 1 shared_informer.go:356] "Caches are synced" controller="TTL"
I1207 22:37:30.370844 1 shared_informer.go:356] "Caches are synced" controller="GC"
I1207 22:37:30.373761 1 shared_informer.go:356] "Caches are synced" controller="disruption"
I1207 22:37:30.375129 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1207 22:37:30.393381 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1207 22:37:30.394427 1 shared_informer.go:356] "Caches are synced" controller="node"
I1207 22:37:30.394498 1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
I1207 22:37:30.394555 1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
I1207 22:37:30.394566 1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
I1207 22:37:30.394574 1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
I1207 22:37:30.396712 1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
I1207 22:37:30.399001 1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
E1207 22:38:13.833053 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1207 22:38:13.835300 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1207 22:38:13.837371 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1207 22:38:13.838381 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1207 22:38:13.841748 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1207 22:38:13.845210 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
==> kube-controller-manager [db300a51b23f] <==
I1207 22:36:43.336585 1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
I1207 22:36:43.336617 1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
I1207 22:36:43.338766 1 shared_informer.go:356] "Caches are synced" controller="PV protection"
I1207 22:36:43.340015 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1207 22:36:43.341094 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
I1207 22:36:43.383802 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
I1207 22:36:43.383839 1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
I1207 22:36:43.383919 1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
I1207 22:36:43.383961 1 shared_informer.go:356] "Caches are synced" controller="disruption"
I1207 22:36:43.383975 1 shared_informer.go:356] "Caches are synced" controller="TTL"
I1207 22:36:43.383964 1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
I1207 22:36:43.384410 1 shared_informer.go:356] "Caches are synced" controller="job"
I1207 22:36:43.384457 1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
I1207 22:36:43.384541 1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
I1207 22:36:43.387359 1 shared_informer.go:356] "Caches are synced" controller="taint"
I1207 22:36:43.387454 1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
I1207 22:36:43.387567 1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-304107"
I1207 22:36:43.387662 1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
I1207 22:36:43.388653 1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
I1207 22:36:43.388684 1 shared_informer.go:356] "Caches are synced" controller="namespace"
I1207 22:36:43.390035 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1207 22:36:43.391379 1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
I1207 22:36:43.393631 1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
I1207 22:36:43.395839 1 shared_informer.go:356] "Caches are synced" controller="deployment"
I1207 22:36:43.402128 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
==> kube-proxy [02e2d85da0b9] <==
I1207 22:37:28.370113 1 server_linux.go:53] "Using iptables proxy"
I1207 22:37:28.449841 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1207 22:37:28.550168 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1207 22:37:28.550208 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
E1207 22:37:28.550345 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1207 22:37:28.572003 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1207 22:37:28.572073 1 server_linux.go:132] "Using iptables Proxier"
I1207 22:37:28.577825 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1207 22:37:28.578255 1 server.go:527] "Version info" version="v1.34.2"
I1207 22:37:28.578284 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1207 22:37:28.579584 1 config.go:200] "Starting service config controller"
I1207 22:37:28.579622 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1207 22:37:28.579664 1 config.go:106] "Starting endpoint slice config controller"
I1207 22:37:28.579670 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1207 22:37:28.579656 1 config.go:403] "Starting serviceCIDR config controller"
I1207 22:37:28.579704 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1207 22:37:28.579726 1 config.go:309] "Starting node config controller"
I1207 22:37:28.579744 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1207 22:37:28.579752 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1207 22:37:28.679842 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1207 22:37:28.679852 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1207 22:37:28.679852 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-proxy [e48c781da85a] <==
I1207 22:36:38.113299 1 server_linux.go:53] "Using iptables proxy"
I1207 22:36:38.179965 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
E1207 22:36:39.982337 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"functional-304107\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
I1207 22:36:41.480124 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1207 22:36:41.480169 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
E1207 22:36:41.480273 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1207 22:36:41.504977 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1207 22:36:41.505038 1 server_linux.go:132] "Using iptables Proxier"
I1207 22:36:41.510792 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1207 22:36:41.511101 1 server.go:527] "Version info" version="v1.34.2"
I1207 22:36:41.511117 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1207 22:36:41.512614 1 config.go:403] "Starting serviceCIDR config controller"
I1207 22:36:41.512640 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1207 22:36:41.512650 1 config.go:200] "Starting service config controller"
I1207 22:36:41.512665 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1207 22:36:41.512665 1 config.go:106] "Starting endpoint slice config controller"
I1207 22:36:41.512682 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1207 22:36:41.512683 1 config.go:309] "Starting node config controller"
I1207 22:36:41.512757 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1207 22:36:41.512769 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1207 22:36:41.612864 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1207 22:36:41.612952 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1207 22:36:41.612982 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
==> kube-scheduler [41bbee6a06fd] <==
I1207 22:37:25.847375 1 serving.go:386] Generated self-signed cert in-memory
I1207 22:37:26.976543 1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
I1207 22:37:26.976567 1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1207 22:37:26.980525 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1207 22:37:26.980528 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I1207 22:37:26.980553 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1207 22:37:26.980563 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I1207 22:37:26.980609 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I1207 22:37:26.980552 1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
I1207 22:37:26.980881 1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
I1207 22:37:26.980967 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I1207 22:37:27.080905 1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
I1207 22:37:27.080928 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1207 22:37:27.080971 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
==> kube-scheduler [41fa6477afc2] <==
I1207 22:36:38.776977 1 serving.go:386] Generated self-signed cert in-memory
W1207 22:36:39.951455 1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1207 22:36:39.951521 1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1207 22:36:39.951534 1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
W1207 22:36:39.951544 1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1207 22:36:39.984100 1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
I1207 22:36:39.984132 1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1207 22:36:39.989511 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1207 22:36:39.989570 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1207 22:36:39.990020 1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
I1207 22:36:39.990114 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I1207 22:36:40.089686 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Dec 07 22:41:14 functional-304107 kubelet[8779]: E1207 22:41:14.499945 8779 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Dec 07 22:41:14 functional-304107 kubelet[8779]: E1207 22:41:14.500006 8779 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Dec 07 22:41:14 functional-304107 kubelet[8779]: E1207 22:41:14.500125 8779 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-fll54_kubernetes-dashboard(d923ff83-4020-47ed-99c2-20a55f686fae): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Dec 07 22:41:14 functional-304107 kubelet[8779]: E1207 22:41:14.500177 8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-fll54" podUID="d923ff83-4020-47ed-99c2-20a55f686fae"
Dec 07 22:41:25 functional-304107 kubelet[8779]: E1207 22:41:25.771307 8779 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Dec 07 22:41:25 functional-304107 kubelet[8779]: E1207 22:41:25.771364 8779 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Dec 07 22:41:25 functional-304107 kubelet[8779]: E1207 22:41:25.771463 8779 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-rgc2w_kubernetes-dashboard(834d5a75-d152-4514-84bb-12983bbb23bc): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Dec 07 22:41:25 functional-304107 kubelet[8779]: E1207 22:41:25.771504 8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rgc2w" podUID="834d5a75-d152-4514-84bb-12983bbb23bc"
Dec 07 22:41:26 functional-304107 kubelet[8779]: E1207 22:41:26.772444 8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-fll54" podUID="d923ff83-4020-47ed-99c2-20a55f686fae"
Dec 07 22:41:37 functional-304107 kubelet[8779]: E1207 22:41:37.772720 8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rgc2w" podUID="834d5a75-d152-4514-84bb-12983bbb23bc"
Dec 07 22:41:41 functional-304107 kubelet[8779]: E1207 22:41:41.771991 8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-fll54" podUID="d923ff83-4020-47ed-99c2-20a55f686fae"
Dec 07 22:41:48 functional-304107 kubelet[8779]: E1207 22:41:48.772268 8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rgc2w" podUID="834d5a75-d152-4514-84bb-12983bbb23bc"
Dec 07 22:41:54 functional-304107 kubelet[8779]: E1207 22:41:54.772781 8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-fll54" podUID="d923ff83-4020-47ed-99c2-20a55f686fae"
Dec 07 22:42:01 functional-304107 kubelet[8779]: E1207 22:42:01.772815 8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rgc2w" podUID="834d5a75-d152-4514-84bb-12983bbb23bc"
Dec 07 22:42:06 functional-304107 kubelet[8779]: E1207 22:42:06.781026 8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-fll54" podUID="d923ff83-4020-47ed-99c2-20a55f686fae"
Dec 07 22:42:15 functional-304107 kubelet[8779]: E1207 22:42:15.773271 8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rgc2w" podUID="834d5a75-d152-4514-84bb-12983bbb23bc"
Dec 07 22:42:20 functional-304107 kubelet[8779]: E1207 22:42:20.772321 8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-fll54" podUID="d923ff83-4020-47ed-99c2-20a55f686fae"
Dec 07 22:42:28 functional-304107 kubelet[8779]: E1207 22:42:28.772483 8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rgc2w" podUID="834d5a75-d152-4514-84bb-12983bbb23bc"
Dec 07 22:42:31 functional-304107 kubelet[8779]: E1207 22:42:31.772050 8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-fll54" podUID="d923ff83-4020-47ed-99c2-20a55f686fae"
Dec 07 22:42:39 functional-304107 kubelet[8779]: E1207 22:42:39.772229 8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rgc2w" podUID="834d5a75-d152-4514-84bb-12983bbb23bc"
Dec 07 22:42:45 functional-304107 kubelet[8779]: E1207 22:42:45.772822 8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-fll54" podUID="d923ff83-4020-47ed-99c2-20a55f686fae"
Dec 07 22:42:51 functional-304107 kubelet[8779]: E1207 22:42:51.772153 8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rgc2w" podUID="834d5a75-d152-4514-84bb-12983bbb23bc"
Dec 07 22:42:56 functional-304107 kubelet[8779]: E1207 22:42:56.772335 8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-fll54" podUID="d923ff83-4020-47ed-99c2-20a55f686fae"
Dec 07 22:43:02 functional-304107 kubelet[8779]: E1207 22:43:02.772997 8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rgc2w" podUID="834d5a75-d152-4514-84bb-12983bbb23bc"
Dec 07 22:43:09 functional-304107 kubelet[8779]: E1207 22:43:09.771828 8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-fll54" podUID="d923ff83-4020-47ed-99c2-20a55f686fae"
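
Every kubelet error above shares one root cause: anonymous Docker Hub pulls of the two dashboard images are rejected with toomanyrequests, so both pods cycle between ErrImagePull and ImagePullBackOff, the dashboard never becomes ready, and TestFunctional/parallel/DashboardCmd fails at the end of this log. Two workaround sketches, assuming a minikube binary on PATH; the image references are copied verbatim from the errors above:

  # Option 1: authenticate the node's Docker daemon so pulls count against an
  # account's quota instead of the anonymous per-IP limit.
  minikube -p functional-304107 ssh -- docker login

  # Option 2: side-load the images from the host so kubelet never needs to
  # pull them from Docker Hub.
  minikube -p functional-304107 image load docker.io/kubernetesui/dashboard:v2.7.0
  minikube -p functional-304107 image load docker.io/kubernetesui/metrics-scraper:v1.0.8
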
==> storage-provisioner [5e90eed3fbdd] <==
I1207 22:36:54.278472 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1207 22:36:54.285665 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1207 22:36:54.285718 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
W1207 22:36:54.287805 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:36:57.742805 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:37:02.003199 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:37:05.601824 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:37:08.656336 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:37:11.678748 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:37:11.684092 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
I1207 22:37:11.684262 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1207 22:37:11.684436 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2e5a30b4-e0f4-4260-983e-9c1d65d52b48", APIVersion:"v1", ResourceVersion:"551", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-304107_6e671eb7-998e-42e1-9718-e975434a6aa0 became leader
I1207 22:37:11.684457 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-304107_6e671eb7-998e-42e1-9718-e975434a6aa0!
W1207 22:37:11.686526 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:37:11.690802 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
I1207 22:37:11.784680 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-304107_6e671eb7-998e-42e1-9718-e975434a6aa0!
W1207 22:37:13.693702 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:37:13.697861 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:37:15.701701 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:37:15.706362 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:37:17.709053 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:37:17.712903 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
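
The repeated warnings here, and in the second storage-provisioner block below, are not failures: this provisioner takes its leader-election lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, acquired above), and every read or renewal of that lock returns the server-side deprecation warning for v1 Endpoints. A sketch for inspecting the lock; the holder recorded in the control-plane.alpha.kubernetes.io/leader annotation should match the 'became leader' event above:

  # Dump the Endpoints object that backs the leader lease; the leader record
  # lives in its annotations.
  kubectl --context functional-304107 -n kube-system \
    get endpoints k8s.io-minikube-hostpath -o yaml
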
==> storage-provisioner [822f5ff4ed50] <==
W1207 22:42:48.923819 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:42:50.927756 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:42:50.931700 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:42:52.935070 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:42:52.939572 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:42:54.943025 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:42:54.948368 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:42:56.951317 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:42:56.955581 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:42:58.959187 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:42:58.963293 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:43:00.966835 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:43:00.971096 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:43:02.974362 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:43:02.978571 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:43:04.982130 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:43:04.987237 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:43:06.990672 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:43:06.994483 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:43:08.998649 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:43:09.002757 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:43:11.006389 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:43:11.010411 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:43:13.013877 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:43:13.019589 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-304107 -n functional-304107
helpers_test.go:269: (dbg) Run: kubectl --context functional-304107 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount dashboard-metrics-scraper-77bf4d6c4c-fll54 kubernetes-dashboard-855c9754f9-rgc2w
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context functional-304107 describe pod busybox-mount dashboard-metrics-scraper-77bf4d6c4c-fll54 kubernetes-dashboard-855c9754f9-rgc2w
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-304107 describe pod busybox-mount dashboard-metrics-scraper-77bf4d6c4c-fll54 kubernetes-dashboard-855c9754f9-rgc2w: exit status 1 (69.36626ms)
-- stdout --
Name: busybox-mount
Namespace: default
Priority: 0
Service Account: default
Node: functional-304107/192.168.49.2
Start Time: Sun, 07 Dec 2025 22:38:05 +0000
Labels: integration-test=busybox-mount
Annotations: <none>
Status: Succeeded
IP: 10.244.0.13
IPs:
IP: 10.244.0.13
Containers:
mount-munger:
Container ID: docker://bb849dc65ed8ca6261155a53ae4b076ab3b743bdcbc0deadff7660638b8f5e67
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID: docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 07 Dec 2025 22:38:18 +0000
Finished: Sun, 07 Dec 2025 22:38:18 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sjjqx (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: HostPath (bare host directory volume)
Path: /mount-9p
HostPathType:
kube-api-access-sjjqx:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m9s default-scheduler Successfully assigned default/busybox-mount to functional-304107
Normal Pulling 5m9s kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Normal Pulled 4m56s kubelet Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.979s (12.649s including waiting). Image size: 4403845 bytes.
Normal Created 4m56s kubelet Created container: mount-munger
Normal Started 4m56s kubelet Started container mount-munger
-- /stdout --
** stderr **
Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-fll54" not found
Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-rgc2w" not found
** /stderr **
helpers_test.go:287: kubectl --context functional-304107 describe pod busybox-mount dashboard-metrics-scraper-77bf4d6c4c-fll54 kubernetes-dashboard-855c9754f9-rgc2w: exit status 1
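
The two NotFound errors in stderr are a namespace artifact rather than evidence the dashboard pods disappeared: the pod listing above used -A, but describe without -n searches only the default namespace, while both pods ran in kubernetes-dashboard (as the kubelet errors show). A namespace-qualified version of the same post-mortem query, as a sketch:

  # Same pod names the post-mortem reported as non-running, queried in the
  # namespace they actually belong to.
  kubectl --context functional-304107 -n kubernetes-dashboard describe pod \
    dashboard-metrics-scraper-77bf4d6c4c-fll54 kubernetes-dashboard-855c9754f9-rgc2w
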
--- FAIL: TestFunctional/parallel/DashboardCmd (302.00s)