=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-140475 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-140475 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-140475 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-140475 --alsologtostderr -v=1] stderr:
I0908 12:35:57.925537 318744 out.go:360] Setting OutFile to fd 1 ...
I0908 12:35:57.927656 318744 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:35:57.927677 318744 out.go:374] Setting ErrFile to fd 2...
I0908 12:35:57.927685 318744 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:35:57.927948 318744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-272936/.minikube/bin
I0908 12:35:57.928263 318744 mustload.go:65] Loading cluster: functional-140475
I0908 12:35:57.928726 318744 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 12:35:57.929175 318744 cli_runner.go:164] Run: docker container inspect functional-140475 --format={{.State.Status}}
I0908 12:35:57.955972 318744 host.go:66] Checking if "functional-140475" exists ...
I0908 12:35:57.956332 318744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0908 12:35:58.047560 318744 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 12:35:58.033782533 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0908 12:35:58.047698 318744 api_server.go:166] Checking apiserver status ...
I0908 12:35:58.047784 318744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0908 12:35:58.047850 318744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
I0908 12:35:58.074638 318744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/functional-140475/id_rsa Username:docker}
I0908 12:35:58.168040 318744 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/9319/cgroup
I0908 12:35:58.177860 318744 api_server.go:182] apiserver freezer: "12:freezer:/docker/f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341/kubepods/burstable/pod8ea3c91dd17a62bb92f198d336979d84/264ed758e3516ada447c6424a841ddcd7554b019586869f939d8214171d797e9"
I0908 12:35:58.177972 318744 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341/kubepods/burstable/pod8ea3c91dd17a62bb92f198d336979d84/264ed758e3516ada447c6424a841ddcd7554b019586869f939d8214171d797e9/freezer.state
I0908 12:35:58.195151 318744 api_server.go:204] freezer state: "THAWED"
I0908 12:35:58.195191 318744 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0908 12:35:58.208897 318744 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W0908 12:35:58.208934 318744 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0908 12:35:58.209125 318744 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 12:35:58.209149 318744 addons.go:69] Setting dashboard=true in profile "functional-140475"
I0908 12:35:58.209166 318744 addons.go:238] Setting addon dashboard=true in "functional-140475"
I0908 12:35:58.209202 318744 host.go:66] Checking if "functional-140475" exists ...
I0908 12:35:58.209605 318744 cli_runner.go:164] Run: docker container inspect functional-140475 --format={{.State.Status}}
I0908 12:35:58.320481 318744 out.go:179] - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0908 12:35:58.323434 318744 out.go:179] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0908 12:35:58.326248 318744 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0908 12:35:58.326279 318744 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0908 12:35:58.326346 318744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
I0908 12:35:58.401589 318744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/functional-140475/id_rsa Username:docker}
I0908 12:35:58.524015 318744 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0908 12:35:58.524036 318744 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0908 12:35:58.549001 318744 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0908 12:35:58.549058 318744 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0908 12:35:58.588850 318744 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0908 12:35:58.588871 318744 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0908 12:35:58.615997 318744 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0908 12:35:58.616018 318744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0908 12:35:58.642583 318744 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0908 12:35:58.642604 318744 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0908 12:35:58.679927 318744 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0908 12:35:58.679950 318744 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0908 12:35:58.713796 318744 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0908 12:35:58.713822 318744 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0908 12:35:58.748263 318744 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0908 12:35:58.748294 318744 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0908 12:35:58.800232 318744 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0908 12:35:58.800253 318744 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0908 12:35:58.839475 318744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0908 12:35:59.918776 318744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.079253649s)
I0908 12:35:59.921969 318744 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p functional-140475 addons enable metrics-server
I0908 12:35:59.924793 318744 addons.go:201] Writing out "functional-140475" config to set dashboard=true...
W0908 12:35:59.925092 318744 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0908 12:35:59.925823 318744 kapi.go:59] client config for functional-140475: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt", KeyFile:"/home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.key", CAFile:"/home/jenkins/minikube-integration/21508-272936/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f2d7d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0908 12:35:59.926525 318744 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0908 12:35:59.926543 318744 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0908 12:35:59.926549 318744 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0908 12:35:59.926560 318744 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0908 12:35:59.926567 318744 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0908 12:35:59.947647 318744 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard kubernetes-dashboard 24b19c89-8a86-490e-8313-c0d41198ef4f 1566 0 2025-09-08 12:35:59 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-08 12:35:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.96.157.142,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.96.157.142],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0908 12:35:59.947835 318744 out.go:285] * Launching proxy ...
* Launching proxy ...
I0908 12:35:59.947969 318744 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-140475 proxy --port 36195]
I0908 12:35:59.948316 318744 dashboard.go:157] Waiting for kubectl to output host:port ...
I0908 12:36:00.083596 318744 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0908 12:36:00.083665 318744 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0908 12:36:00.135513 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3fc450f9-e82d-441f-ae1d-7227cd961e9f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004eeac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e0640 TLS:<nil>}
I0908 12:36:00.135604 318744 retry.go:31] will retry after 62.261µs: Temporary Error: unexpected response code: 503
I0908 12:36:00.142419 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6107dc49-b4d0-4f37-8399-d7cafeaadede] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004eeb40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e0780 TLS:<nil>}
I0908 12:36:00.142492 318744 retry.go:31] will retry after 82.417µs: Temporary Error: unexpected response code: 503
I0908 12:36:00.158868 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5b5540e4-4d96-4b2d-b230-34f381ae04c7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004eec00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e08c0 TLS:<nil>}
I0908 12:36:00.158982 318744 retry.go:31] will retry after 324.394µs: Temporary Error: unexpected response code: 503
I0908 12:36:00.209812 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[24d9d229-0aa1-47b4-bbc6-458598222638] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004eecc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e0a00 TLS:<nil>}
I0908 12:36:00.209883 318744 retry.go:31] will retry after 465.312µs: Temporary Error: unexpected response code: 503
I0908 12:36:00.229080 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0ebe42aa-14fe-4c22-90b6-657a0a227886] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004eed40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e0b40 TLS:<nil>}
I0908 12:36:00.229155 318744 retry.go:31] will retry after 617.56µs: Temporary Error: unexpected response code: 503
I0908 12:36:00.247278 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[07406cc8-84a0-46d4-89c7-5ffb6f1232bc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004eedc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e0c80 TLS:<nil>}
I0908 12:36:00.247346 318744 retry.go:31] will retry after 897.768µs: Temporary Error: unexpected response code: 503
I0908 12:36:00.289073 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9659e23d-f059-4701-8d66-e612e51c73bb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004eee40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e0dc0 TLS:<nil>}
I0908 12:36:00.289139 318744 retry.go:31] will retry after 679.578µs: Temporary Error: unexpected response code: 503
I0908 12:36:00.308538 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2780ea42-d2bb-46e1-9b21-2b0888693e4d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004eef00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e0f00 TLS:<nil>}
I0908 12:36:00.308613 318744 retry.go:31] will retry after 1.943586ms: Temporary Error: unexpected response code: 503
I0908 12:36:00.315171 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[39f1170a-2974-4dfc-af52-1a63bdb360de] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004eef80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e1040 TLS:<nil>}
I0908 12:36:00.315235 318744 retry.go:31] will retry after 2.122617ms: Temporary Error: unexpected response code: 503
I0908 12:36:00.347562 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8f8e4c83-09b7-45df-8c2f-867654a04f2a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004ef000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e1180 TLS:<nil>}
I0908 12:36:00.347636 318744 retry.go:31] will retry after 2.093445ms: Temporary Error: unexpected response code: 503
I0908 12:36:00.365962 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3fb8539c-78cb-4272-be38-794bbd292fa9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004ef080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e12c0 TLS:<nil>}
I0908 12:36:00.366036 318744 retry.go:31] will retry after 3.43882ms: Temporary Error: unexpected response code: 503
I0908 12:36:00.373517 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[17acf533-545c-4783-a92b-5739288be714] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004ef100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e1680 TLS:<nil>}
I0908 12:36:00.373588 318744 retry.go:31] will retry after 12.473955ms: Temporary Error: unexpected response code: 503
I0908 12:36:00.390341 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3c17b1d9-1243-49b3-832f-36f8dbfc23d7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004ef180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e17c0 TLS:<nil>}
I0908 12:36:00.390415 318744 retry.go:31] will retry after 17.830611ms: Temporary Error: unexpected response code: 503
I0908 12:36:00.412393 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f6923f3b-2ab0-407d-a18b-509769027255] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004ef200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e1900 TLS:<nil>}
I0908 12:36:00.412462 318744 retry.go:31] will retry after 23.140427ms: Temporary Error: unexpected response code: 503
I0908 12:36:00.440422 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3061307f-2590-496f-a036-dc98e5b7a21a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004ef280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e1a40 TLS:<nil>}
I0908 12:36:00.440488 318744 retry.go:31] will retry after 37.559769ms: Temporary Error: unexpected response code: 503
I0908 12:36:00.482401 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[28a42824-f775-438e-996c-aae2f971016a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004ef300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e1b80 TLS:<nil>}
I0908 12:36:00.482471 318744 retry.go:31] will retry after 58.118998ms: Temporary Error: unexpected response code: 503
I0908 12:36:00.550084 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ebc372cb-1d74-4eca-af2a-b14c0e69abb7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004ef380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e1cc0 TLS:<nil>}
I0908 12:36:00.550155 318744 retry.go:31] will retry after 63.121924ms: Temporary Error: unexpected response code: 503
I0908 12:36:00.618410 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e70b5e35-907a-4c95-bfdf-dc2d01c11dec] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004ef400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e1e00 TLS:<nil>}
I0908 12:36:00.618474 318744 retry.go:31] will retry after 98.600478ms: Temporary Error: unexpected response code: 503
I0908 12:36:00.720824 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[38fbdbe4-d337-4f56-8444-0e7f83551455] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004ef740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003f0000 TLS:<nil>}
I0908 12:36:00.720886 318744 retry.go:31] will retry after 206.504398ms: Temporary Error: unexpected response code: 503
I0908 12:36:00.931236 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8dc4c038-7d61-4c2d-ab22-24531dcf79e8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004ef800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003f0140 TLS:<nil>}
I0908 12:36:00.931303 318744 retry.go:31] will retry after 278.348629ms: Temporary Error: unexpected response code: 503
I0908 12:36:01.213928 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1cb0c6f8-62a7-4059-a86b-a342ba92adc1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:01 GMT]] Body:0x40004ef880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003f0280 TLS:<nil>}
I0908 12:36:01.213995 318744 retry.go:31] will retry after 263.268583ms: Temporary Error: unexpected response code: 503
I0908 12:36:01.481527 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[20b77f5f-4d03-43e1-8c42-b82545ca3432] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:01 GMT]] Body:0x40004ef980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003f03c0 TLS:<nil>}
I0908 12:36:01.481600 318744 retry.go:31] will retry after 347.367696ms: Temporary Error: unexpected response code: 503
I0908 12:36:01.833344 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f46e8aa8-0516-4a0e-b2ef-cdabd53318f4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:01 GMT]] Body:0x40004efa00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003f0500 TLS:<nil>}
I0908 12:36:01.833408 318744 retry.go:31] will retry after 1.097415118s: Temporary Error: unexpected response code: 503
I0908 12:36:02.934645 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4309747f-9b19-4aaf-ac2b-496a35c9b8f2] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:02 GMT]] Body:0x400089dd80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004383c0 TLS:<nil>}
I0908 12:36:02.934704 318744 retry.go:31] will retry after 1.503159086s: Temporary Error: unexpected response code: 503
I0908 12:36:04.441454 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5c0053e3-7ec3-4bd5-989b-904c7a5fd667] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:04 GMT]] Body:0x400089de00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000438640 TLS:<nil>}
I0908 12:36:04.441525 318744 retry.go:31] will retry after 1.756107322s: Temporary Error: unexpected response code: 503
I0908 12:36:06.200727 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[93be152c-c300-4d87-8fb2-fff201eae5c8] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:06 GMT]] Body:0x400089de80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000438780 TLS:<nil>}
I0908 12:36:06.200791 318744 retry.go:31] will retry after 1.593747047s: Temporary Error: unexpected response code: 503
I0908 12:36:07.798560 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8dd41046-4874-4477-aeab-05d955584918] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:07 GMT]] Body:0x400089df00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004388c0 TLS:<nil>}
I0908 12:36:07.798618 318744 retry.go:31] will retry after 5.025678361s: Temporary Error: unexpected response code: 503
I0908 12:36:12.827481 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[36c86fdf-d5fc-4d58-a94a-4f74ed2609aa] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:12 GMT]] Body:0x40008d20c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003f0640 TLS:<nil>}
I0908 12:36:12.827545 318744 retry.go:31] will retry after 5.010852785s: Temporary Error: unexpected response code: 503
I0908 12:36:17.842378 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[aad483cc-bf23-4f4e-a345-92fc5f740321] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:17 GMT]] Body:0x40008f1300 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003f0780 TLS:<nil>}
I0908 12:36:17.842490 318744 retry.go:31] will retry after 7.35670499s: Temporary Error: unexpected response code: 503
I0908 12:36:25.202312 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5fac5fec-b417-4ebd-a901-688a86140019] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:25 GMT]] Body:0x40008f1680 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003f08c0 TLS:<nil>}
I0908 12:36:25.202391 318744 retry.go:31] will retry after 14.180156748s: Temporary Error: unexpected response code: 503
I0908 12:36:39.385756 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[851bd054-4334-4bcf-b06c-3197968c0ac1] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:39 GMT]] Body:0x40008d2200 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000438a00 TLS:<nil>}
I0908 12:36:39.385821 318744 retry.go:31] will retry after 23.748774861s: Temporary Error: unexpected response code: 503
I0908 12:37:03.138375 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[674c074c-b812-47f2-a71d-4fd09cc09b47] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:37:03 GMT]] Body:0x40008d22c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000438b40 TLS:<nil>}
I0908 12:37:03.138438 318744 retry.go:31] will retry after 19.755358128s: Temporary Error: unexpected response code: 503
I0908 12:37:22.898985 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c351c121-7a93-4f86-aec3-d798e387a196] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:37:22 GMT]] Body:0x40008f1800 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003f0a00 TLS:<nil>}
I0908 12:37:22.899048 318744 retry.go:31] will retry after 56.605169321s: Temporary Error: unexpected response code: 503
I0908 12:38:19.508740 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[21fd5687-f757-4e7a-bea7-074477821648] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:38:19 GMT]] Body:0x40008d2080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003f0b40 TLS:<nil>}
I0908 12:38:19.508805 318744 retry.go:31] will retry after 45.817120443s: Temporary Error: unexpected response code: 503
I0908 12:39:05.330911 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[88fb75ee-b83c-49e9-95e4-0e9d96a92932] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:39:05 GMT]] Body:0x40008f1340 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000438140 TLS:<nil>}
I0908 12:39:05.330973 318744 retry.go:31] will retry after 59.112236474s: Temporary Error: unexpected response code: 503
I0908 12:40:04.446293 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ec4e3a88-2e9a-4b3e-85c9-d0ef8cc4aef0] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:40:04 GMT]] Body:0x40008f1340 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003f0c80 TLS:<nil>}
I0908 12:40:04.446362 318744 retry.go:31] will retry after 31.371252517s: Temporary Error: unexpected response code: 503
I0908 12:40:35.822202 318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[40ee2a6b-ba22-4a31-9790-542d269df80e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:40:35 GMT]] Body:0x40008d21c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000438000 TLS:<nil>}
I0908 12:40:35.822266 318744 retry.go:31] will retry after 35.73055928s: Temporary Error: unexpected response code: 503
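Context for the failure above: the "Verifying proxy health ..." step (dashboard.go:214) keeps polling the kubernetes-dashboard service through the kubectl proxy and backs off between attempts (retry.go:31) until the pod behind the service starts answering; since it returned 503 for the whole window, the dashboard command never printed a URL and the test failed at functional_test.go:933. A minimal Go sketch of that kind of backoff poll follows; the URL and port 36195 are taken from the log, while the function name, timings, and error handling are illustrative only and are not minikube's actual implementation.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitFor200 polls url until it returns HTTP 200 or the timeout expires,
// roughly doubling the wait between attempts like the retry.go lines above.
func waitFor200(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := 100 * time.Millisecond
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // dashboard is serving behind the proxy
			}
		}
		time.Sleep(backoff)
		if backoff < 30*time.Second {
			backoff *= 2
		}
	}
	return fmt.Errorf("no healthy response from %s within %s", url, timeout)
}

func main() {
	// Proxy URL as seen in the dashboard.go log lines above.
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	if err := waitFor200(url, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}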
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect functional-140475
helpers_test.go:243: (dbg) docker inspect functional-140475:
-- stdout --
[
{
"Id": "f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341",
"Created": "2025-09-08T12:22:33.259116131Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 300594,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-09-08T12:22:33.335511126Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
"ResolvConfPath": "/var/lib/docker/containers/f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341/hostname",
"HostsPath": "/var/lib/docker/containers/f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341/hosts",
"LogPath": "/var/lib/docker/containers/f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341/f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341-json.log",
"Name": "/functional-140475",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-140475:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-140475",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341",
"LowerDir": "/var/lib/docker/overlay2/d312c914cb3c70debf4b39ba5376f977a50ea3960281d7f5c74cdcd5b6aa7804-init/diff:/var/lib/docker/overlay2/4e9e34582c8fac27b8acdffb5ccaf9d8b30c2dae25a1b3b2b79fa116bc7d16cb/diff",
"MergedDir": "/var/lib/docker/overlay2/d312c914cb3c70debf4b39ba5376f977a50ea3960281d7f5c74cdcd5b6aa7804/merged",
"UpperDir": "/var/lib/docker/overlay2/d312c914cb3c70debf4b39ba5376f977a50ea3960281d7f5c74cdcd5b6aa7804/diff",
"WorkDir": "/var/lib/docker/overlay2/d312c914cb3c70debf4b39ba5376f977a50ea3960281d7f5c74cdcd5b6aa7804/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-140475",
"Source": "/var/lib/docker/volumes/functional-140475/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-140475",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-140475",
"name.minikube.sigs.k8s.io": "functional-140475",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "3ce5a484316b053698273bb74a7bfdcf5e2405e0d4a8e758d9e2edbdb00445ff",
"SandboxKey": "/var/run/docker/netns/3ce5a484316b",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33143"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33144"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33147"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33145"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33146"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-140475": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "de:bd:9a:64:d3:4a",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "c192c78e034e0a71a0e148767e9b0ec7ae14d2f5e09e1cfa298441ea22bbe0e5",
"EndpointID": "b362f57131db43ce06461506a6aa968ca551222e3cb2b1e2a1609968c677929a",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-140475",
"f779030ae61f"
]
}
}
}
}
]
-- /stdout --
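For reference, the "NetworkSettings.Ports" block in the docker inspect output above is exactly what the cli_runner calls in this log query (see the inspect command at 12:35:58.047850) to find the host port mapped to the node's SSH port. A small Go sketch of the same lookup is below; the format template and container name are copied from the log, while the Go wrapper itself is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Format template copied from the cli_runner call in the log above.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "functional-140475").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// Per the NetworkSettings section above this prints 33143.
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}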
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-140475 -n functional-140475
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-arm64 -p functional-140475 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-140475 logs -n 25: (1.142365535s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs:
-- stdout --
==> Audit <==
┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ image │ functional-140475 image save kicbase/echo-server:functional-140475 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
│ image │ functional-140475 image rm kicbase/echo-server:functional-140475 --alsologtostderr │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
│ image │ functional-140475 image ls │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
│ image │ functional-140475 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
│ image │ functional-140475 image ls │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
│ image │ functional-140475 image save --daemon kicbase/echo-server:functional-140475 --alsologtostderr │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
│ docker-env │ functional-140475 docker-env │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
│ docker-env │ functional-140475 docker-env │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
│ ssh │ functional-140475 ssh sudo cat /etc/test/nested/copy/274796/hosts │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
│ ssh │ functional-140475 ssh sudo cat /etc/ssl/certs/274796.pem │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
│ ssh │ functional-140475 ssh sudo cat /usr/share/ca-certificates/274796.pem │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
│ ssh │ functional-140475 ssh sudo cat /etc/ssl/certs/51391683.0 │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
│ ssh │ functional-140475 ssh sudo cat /etc/ssl/certs/2747962.pem │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
│ ssh │ functional-140475 ssh sudo cat /usr/share/ca-certificates/2747962.pem │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
│ ssh │ functional-140475 ssh sudo cat /etc/ssl/certs/3ec20f2e.0 │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
│ image │ functional-140475 image ls --format short --alsologtostderr │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
│ update-context │ functional-140475 update-context --alsologtostderr -v=2 │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
│ ssh │ functional-140475 ssh pgrep buildkitd │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ │
│ image │ functional-140475 image build -t localhost/my-image:functional-140475 testdata/build --alsologtostderr │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
│ image │ functional-140475 image ls │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
│ image │ functional-140475 image ls --format yaml --alsologtostderr │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
│ image │ functional-140475 image ls --format json --alsologtostderr │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
│ image │ functional-140475 image ls --format table --alsologtostderr │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
│ update-context │ functional-140475 update-context --alsologtostderr -v=2 │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
│ update-context │ functional-140475 update-context --alsologtostderr -v=2 │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/09/08 12:35:57
Running on machine: ip-172-31-31-251
Binary: Built with gc go1.24.6 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0908 12:35:57.580359 318643 out.go:360] Setting OutFile to fd 1 ...
I0908 12:35:57.580564 318643 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:35:57.580570 318643 out.go:374] Setting ErrFile to fd 2...
I0908 12:35:57.580576 318643 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:35:57.581023 318643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-272936/.minikube/bin
I0908 12:35:57.581649 318643 out.go:368] Setting JSON to false
I0908 12:35:57.583444 318643 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8308,"bootTime":1757326650,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
I0908 12:35:57.583539 318643 start.go:140] virtualization:
I0908 12:35:57.586955 318643 out.go:179] * [functional-140475] minikube v1.36.0 on Ubuntu 20.04 (arm64)
I0908 12:35:57.592776 318643 out.go:179] - MINIKUBE_LOCATION=21508
I0908 12:35:57.593090 318643 notify.go:220] Checking for updates...
I0908 12:35:57.605487 318643 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0908 12:35:57.608633 318643 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21508-272936/kubeconfig
I0908 12:35:57.612109 318643 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-272936/.minikube
I0908 12:35:57.615824 318643 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I0908 12:35:57.621034 318643 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I0908 12:35:57.624547 318643 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 12:35:57.625659 318643 driver.go:421] Setting default libvirt URI to qemu:///system
I0908 12:35:57.662370 318643 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
I0908 12:35:57.662493 318643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0908 12:35:57.730048 318643 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 12:35:57.719428123 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0908 12:35:57.730160 318643 docker.go:318] overlay module found
I0908 12:35:57.734094 318643 out.go:179] * Using the docker driver based on existing profile
I0908 12:35:57.737112 318643 start.go:304] selected driver: docker
I0908 12:35:57.737131 318643 start.go:918] validating driver "docker" against &{Name:functional-140475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-140475 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0908 12:35:57.737212 318643 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0908 12:35:57.737326 318643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0908 12:35:57.832552 318643 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-08 12:35:57.821197016 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0908 12:35:57.832914 318643 cni.go:84] Creating CNI manager for ""
I0908 12:35:57.832977 318643 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0908 12:35:57.833023 318643 start.go:348] cluster config:
{Name:functional-140475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-140475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0908 12:35:57.836035 318643 out.go:179] * dry-run validation complete!
==> Docker <==
Sep 08 12:36:01 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:36:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ddcf15aa5dddd13ba972b31c1a4235c77114fa9b3a48bc6077b0ca2d843ec8e2/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Sep 08 12:36:01 functional-140475 dockerd[6902]: time="2025-09-08T12:36:01.879829808Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Sep 08 12:36:01 functional-140475 dockerd[6902]: time="2025-09-08T12:36:01.977312118Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 08 12:36:02 functional-140475 dockerd[6902]: time="2025-09-08T12:36:02.027115497Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Sep 08 12:36:02 functional-140475 dockerd[6902]: time="2025-09-08T12:36:02.117051098Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 08 12:36:15 functional-140475 dockerd[6902]: time="2025-09-08T12:36:15.288216114Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Sep 08 12:36:15 functional-140475 dockerd[6902]: time="2025-09-08T12:36:15.375911929Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 08 12:36:18 functional-140475 dockerd[6902]: time="2025-09-08T12:36:18.281107434Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Sep 08 12:36:18 functional-140475 dockerd[6902]: time="2025-09-08T12:36:18.372202815Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 08 12:36:29 functional-140475 dockerd[6902]: time="2025-09-08T12:36:29.459298453Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 08 12:36:40 functional-140475 dockerd[6902]: time="2025-09-08T12:36:40.290387165Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Sep 08 12:36:40 functional-140475 dockerd[6902]: time="2025-09-08T12:36:40.377620711Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 08 12:36:43 functional-140475 dockerd[6902]: time="2025-09-08T12:36:43.280602475Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Sep 08 12:36:43 functional-140475 dockerd[6902]: time="2025-09-08T12:36:43.367565765Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 08 12:36:48 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:36:48Z" level=info msg="Stop pulling image kicbase/echo-server:latest: Status: Image is up to date for kicbase/echo-server:latest"
Sep 08 12:37:32 functional-140475 dockerd[6902]: time="2025-09-08T12:37:32.294158774Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Sep 08 12:37:32 functional-140475 dockerd[6902]: time="2025-09-08T12:37:32.473499316Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 08 12:37:32 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:37:32Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Pulling from kubernetesui/metrics-scraper"
Sep 08 12:37:34 functional-140475 dockerd[6902]: time="2025-09-08T12:37:34.286429324Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Sep 08 12:37:34 functional-140475 dockerd[6902]: time="2025-09-08T12:37:34.371865517Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 08 12:38:56 functional-140475 dockerd[6902]: time="2025-09-08T12:38:56.288414334Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Sep 08 12:38:56 functional-140475 dockerd[6902]: time="2025-09-08T12:38:56.460764238Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 08 12:38:56 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:38:56Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Pulling from kubernetesui/metrics-scraper"
Sep 08 12:39:01 functional-140475 dockerd[6902]: time="2025-09-08T12:39:01.278904915Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Sep 08 12:39:01 functional-140475 dockerd[6902]: time="2025-09-08T12:39:01.358033408Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
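Note: the dockerd entries above show every attempt to pull docker.io/kubernetesui/dashboard and docker.io/kubernetesui/metrics-scraper failing with Docker Hub's unauthenticated pull rate limit, so the dashboard containers never get their images (see the kubelet ImagePullBackOff entries further down). One possible workaround, sketched here as an assumption and not something this run performed, is to pull the images on a host with authenticated Docker Hub access and load them into this profile's node so kubelet finds them locally:
$ docker login                                            # authenticate to raise the Hub pull limit
$ docker pull docker.io/kubernetesui/dashboard:v2.7.0
$ docker pull docker.io/kubernetesui/metrics-scraper:v1.0.8
$ minikube -p functional-140475 image load docker.io/kubernetesui/dashboard:v2.7.0
$ minikube -p functional-140475 image load docker.io/kubernetesui/metrics-scraper:v1.0.8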
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
27f8085554fc1 kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 4 minutes ago Running echo-server 0 83e116db12522 hello-node-connect-7d85dfc575-t5bmg
70117b128644e gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 5 minutes ago Exited mount-munger 0 e1729aa9858a3 busybox-mount
6049e4afeaa34 kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 5 minutes ago Running echo-server 0 3b73d8658387f hello-node-75c85bcc94-x22bh
a0f0bf25d6321 nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8 15 minutes ago Running nginx 0 f7c16945d9528 nginx-svc
28f6049f43b21 138784d87c9c5 15 minutes ago Running coredns 2 baa450b79b795 coredns-66bc5c9577-p79xg
ae4f0a6634637 6fc32d66c1411 15 minutes ago Running kube-proxy 3 4aa0c34c1b998 kube-proxy-mtw87
3e56d761c8413 ba04bb24b9575 15 minutes ago Running storage-provisioner 4 5df3eb436cb71 storage-provisioner
8e057b7f41de7 a25f5ef9c34c3 15 minutes ago Running kube-scheduler 3 02257bbd60ea0 kube-scheduler-functional-140475
264ed758e3516 d291939e99406 15 minutes ago Running kube-apiserver 0 a12878798ff69 kube-apiserver-functional-140475
b5a5bff40e315 996be7e86d9b3 15 minutes ago Running kube-controller-manager 3 46117db363397 kube-controller-manager-functional-140475
ca1f2eb2a56e6 a1894772a478e 15 minutes ago Running etcd 2 7876da18c12d9 etcd-functional-140475
207c0c3df856b 996be7e86d9b3 15 minutes ago Created kube-controller-manager 2 ddb7ba696cfb7 kube-controller-manager-functional-140475
152b108a85c33 a25f5ef9c34c3 15 minutes ago Created kube-scheduler 2 0101077a99284 kube-scheduler-functional-140475
4ce992834f477 6fc32d66c1411 15 minutes ago Exited kube-proxy 2 e1c898d52b181 kube-proxy-mtw87
6bd6d37f8ca18 ba04bb24b9575 16 minutes ago Exited storage-provisioner 3 5d4262a965e8d storage-provisioner
e96a79d425559 138784d87c9c5 16 minutes ago Exited coredns 1 624d0c2700e94 coredns-66bc5c9577-p79xg
02ea4d507b878 a1894772a478e 16 minutes ago Exited etcd 1 f043527a68f42 etcd-functional-140475
==> coredns [28f6049f43b2] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
CoreDNS-1.12.1
linux/arm64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:53675 - 53593 "HINFO IN 2100446065665056732.4545956877188492551. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030972055s
==> coredns [e96a79d42555] <==
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
CoreDNS-1.12.1
linux/arm64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:50588 - 59981 "HINFO IN 3063204499314061978.8433951396937313554. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013894801s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> describe nodes <==
Name: functional-140475
Roles: control-plane
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=functional-140475
kubernetes.io/os=linux
minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba
minikube.k8s.io/name=functional-140475
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_09_08T12_23_00_0700
minikube.k8s.io/version=v1.36.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 08 Sep 2025 12:22:57 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-140475
AcquireTime: <unset>
RenewTime: Mon, 08 Sep 2025 12:40:49 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 08 Sep 2025 12:36:53 +0000 Mon, 08 Sep 2025 12:22:53 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 08 Sep 2025 12:36:53 +0000 Mon, 08 Sep 2025 12:22:53 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 08 Sep 2025 12:36:53 +0000 Mon, 08 Sep 2025 12:22:53 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 08 Sep 2025 12:36:53 +0000 Mon, 08 Sep 2025 12:22:57 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-140475
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022304Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022304Ki
pods: 110
System Info:
Machine ID: 28cc2cd20fe74a02bfe1586146117dde
System UUID: a03639dc-39eb-4af1-8eff-ffc8a710a78a
Boot ID: 3b69f852-7505-47f7-82de-581d66319e23
Kernel Version: 5.15.0-1084-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://28.4.0
Kubelet Version: v1.34.0
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (13 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-75c85bcc94-x22bh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
default hello-node-connect-7d85dfc575-t5bmg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15m
default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15m
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15m
kube-system coredns-66bc5c9577-p79xg 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 17m
kube-system etcd-functional-140475 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 17m
kube-system kube-apiserver-functional-140475 250m (12%) 0 (0%) 0 (0%) 0 (0%) 15m
kube-system kube-controller-manager-functional-140475 200m (10%) 0 (0%) 0 (0%) 0 (0%) 17m
kube-system kube-proxy-mtw87 0 (0%) 0 (0%) 0 (0%) 0 (0%) 17m
kube-system kube-scheduler-functional-140475 100m (5%) 0 (0%) 0 (0%) 0 (0%) 17m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 17m
kubernetes-dashboard dashboard-metrics-scraper-77bf4d6c4c-4kltn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m
kubernetes-dashboard kubernetes-dashboard-855c9754f9-zjscm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (2%) 170Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 17m kube-proxy
Normal Starting 15m kube-proxy
Normal Starting 16m kube-proxy
Normal Starting 18m kubelet Starting kubelet.
Warning CgroupV1 18m kubelet cgroup v1 support is in maintenance mode, please migrate to cgroup v2
Normal NodeHasSufficientMemory 18m (x8 over 18m) kubelet Node functional-140475 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 18m (x8 over 18m) kubelet Node functional-140475 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 18m (x7 over 18m) kubelet Node functional-140475 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 18m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 17m kubelet Node functional-140475 status is now: NodeHasSufficientMemory
Warning CgroupV1 17m kubelet cgroup v1 support is in maintenance mode, please migrate to cgroup v2
Normal NodeAllocatableEnforced 17m kubelet Updated Node Allocatable limit across pods
Normal NodeHasNoDiskPressure 17m kubelet Node functional-140475 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 17m kubelet Node functional-140475 status is now: NodeHasSufficientPID
Normal Starting 17m kubelet Starting kubelet.
Normal RegisteredNode 17m node-controller Node functional-140475 event: Registered Node functional-140475 in Controller
Warning ContainerGCFailed 16m kubelet rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Normal RegisteredNode 16m node-controller Node functional-140475 event: Registered Node functional-140475 in Controller
Normal Starting 15m kubelet Starting kubelet.
Warning CgroupV1 15m kubelet cgroup v1 support is in maintenance mode, please migrate to cgroup v2
Normal NodeHasSufficientMemory 15m (x8 over 15m) kubelet Node functional-140475 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 15m (x8 over 15m) kubelet Node functional-140475 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 15m (x7 over 15m) kubelet Node functional-140475 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 15m kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 15m node-controller Node functional-140475 event: Registered Node functional-140475 in Controller
==> dmesg <==
[Sep 8 10:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.014150] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.486895] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.033827] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +0.725700] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.488700] kauditd_printk_skb: 36 callbacks suppressed
[Sep 8 10:40] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[Sep 8 11:30] hrtimer: interrupt took 33050655 ns
[Sep 8 12:15] kauditd_printk_skb: 8 callbacks suppressed
[Sep 8 12:35] FS-Cache: Duplicate cookie detected
[ +0.000684] FS-Cache: O-cookie c=00000025 [p=00000002 fl=222 nc=0 na=1]
[ +0.000907] FS-Cache: O-cookie d=00000000f75621f8{9P.session} n=000000002e0501ee
[ +0.001029] FS-Cache: O-key=[10] '34323936393639353436'
[ +0.000727] FS-Cache: N-cookie c=00000026 [p=00000002 fl=2 nc=0 na=1]
[ +0.000883] FS-Cache: N-cookie d=00000000f75621f8{9P.session} n=00000000ccfa13d2
[ +0.001067] FS-Cache: N-key=[10] '34323936393639353436'
==> etcd [02ea4d507b87] <==
{"level":"warn","ts":"2025-09-08T12:24:16.639838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39422","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T12:24:16.649417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39446","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T12:24:16.688735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39462","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T12:24:16.731763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39468","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T12:24:16.765271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39480","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T12:24:16.778536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39492","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T12:24:16.910398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39510","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-09-08T12:24:57.890978Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2025-09-08T12:24:57.891064Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-140475","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
{"level":"error","ts":"2025-09-08T12:24:57.891169Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-09-08T12:24:57.891444Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-09-08T12:25:04.896880Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-09-08T12:25:04.896943Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"warn","ts":"2025-09-08T12:25:04.897086Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2025-09-08T12:25:04.897172Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"error","ts":"2025-09-08T12:25:04.897205Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-09-08T12:25:04.897262Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
{"level":"info","ts":"2025-09-08T12:25:04.897312Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
{"level":"warn","ts":"2025-09-08T12:25:04.899523Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
{"level":"warn","ts":"2025-09-08T12:25:04.899576Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
{"level":"error","ts":"2025-09-08T12:25:04.899589Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-09-08T12:25:04.902221Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"error","ts":"2025-09-08T12:25:04.902306Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-09-08T12:25:04.902428Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2025-09-08T12:25:04.902507Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-140475","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
==> etcd [ca1f2eb2a56e] <==
{"level":"warn","ts":"2025-09-08T12:25:17.711751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41886","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T12:25:17.736991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41898","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T12:25:17.763646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41920","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T12:25:17.841197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41940","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T12:25:17.852797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41958","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T12:25:17.876013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41966","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T12:25:17.907257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41974","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T12:25:17.950331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42000","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T12:25:17.980848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42028","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T12:25:18.017529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42048","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T12:25:18.061396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42052","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T12:25:18.088202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42082","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T12:25:18.122939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42084","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T12:25:18.166770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42088","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T12:25:18.220953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42106","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T12:25:18.244197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42122","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T12:25:18.257729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42146","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T12:25:18.273735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42166","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T12:25:18.337140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42186","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-09-08T12:35:16.771183Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1157}
{"level":"info","ts":"2025-09-08T12:35:16.795010Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1157,"took":"23.469328ms","hash":1845616592,"current-db-size-bytes":3301376,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1523712,"current-db-size-in-use":"1.5 MB"}
{"level":"info","ts":"2025-09-08T12:35:16.795073Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1845616592,"revision":1157,"compact-revision":-1}
{"level":"info","ts":"2025-09-08T12:40:16.775355Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1461}
{"level":"info","ts":"2025-09-08T12:40:16.778328Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1461,"took":"2.429889ms","hash":923434856,"current-db-size-bytes":3301376,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":2330624,"current-db-size-in-use":"2.3 MB"}
{"level":"info","ts":"2025-09-08T12:40:16.778383Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":923434856,"revision":1461,"compact-revision":1157}
==> kernel <==
12:40:59 up 2:23, 0 users, load average: 0.15, 0.45, 0.71
Linux functional-140475 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kube-apiserver [264ed758e351] <==
I0908 12:28:52.414335 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 12:29:25.650405 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 12:29:52.823851 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.39.142"}
I0908 12:30:14.183727 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 12:30:47.334511 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 12:31:43.367778 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 12:32:05.544623 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 12:32:45.391793 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 12:33:29.726018 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 12:34:06.733632 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 12:34:30.956819 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 12:35:19.387854 1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
I0908 12:35:24.827665 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 12:35:52.296396 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 12:35:59.459035 1 controller.go:667] quota admission added evaluator for: namespaces
I0908 12:35:59.852223 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.157.142"}
I0908 12:35:59.897954 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.149.142"}
I0908 12:36:33.153896 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 12:37:03.840767 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 12:37:45.854817 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 12:38:24.845669 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 12:38:49.479578 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 12:39:32.292531 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 12:40:06.437273 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 12:40:34.240173 1 stats.go:136] "Error getting keys" err="empty key: \"\""
==> kube-controller-manager [207c0c3df856] <==
==> kube-controller-manager [b5a5bff40e31] <==
I0908 12:25:22.685677 1 shared_informer.go:356] "Caches are synced" controller="expand"
I0908 12:25:22.686301 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I0908 12:25:22.691882 1 shared_informer.go:356] "Caches are synced" controller="PV protection"
I0908 12:25:22.696116 1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
I0908 12:25:22.698431 1 shared_informer.go:356] "Caches are synced" controller="deployment"
I0908 12:25:22.711012 1 shared_informer.go:356] "Caches are synced" controller="disruption"
I0908 12:25:22.714314 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I0908 12:25:22.723484 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I0908 12:25:22.729486 1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
I0908 12:25:22.729518 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I0908 12:25:22.729912 1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
I0908 12:25:22.730035 1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
I0908 12:25:22.729535 1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
I0908 12:25:22.729765 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
I0908 12:25:22.731485 1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
I0908 12:25:22.731720 1 shared_informer.go:356] "Caches are synced" controller="taint"
I0908 12:25:22.734707 1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
I0908 12:25:22.736252 1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-140475"
I0908 12:25:22.737513 1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
E0908 12:35:59.606734 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E0908 12:35:59.617788 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E0908 12:35:59.630008 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E0908 12:35:59.637787 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E0908 12:35:59.642582 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E0908 12:35:59.652929 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
==> kube-proxy [4ce992834f47] <==
I0908 12:25:10.488602 1 server_linux.go:53] "Using iptables proxy"
I0908 12:25:10.606414 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
E0908 12:25:10.607244 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-140475&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
==> kube-proxy [ae4f0a663463] <==
I0908 12:25:20.785069 1 server_linux.go:53] "Using iptables proxy"
I0908 12:25:20.921442 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I0908 12:25:21.025213 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I0908 12:25:21.025245 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
E0908 12:25:21.025320 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0908 12:25:21.179249 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0908 12:25:21.182406 1 server_linux.go:132] "Using iptables Proxier"
I0908 12:25:21.227302 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0908 12:25:21.227578 1 server.go:527] "Version info" version="v1.34.0"
I0908 12:25:21.227593 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0908 12:25:21.229207 1 config.go:200] "Starting service config controller"
I0908 12:25:21.229217 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I0908 12:25:21.229243 1 config.go:106] "Starting endpoint slice config controller"
I0908 12:25:21.229247 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I0908 12:25:21.229258 1 config.go:403] "Starting serviceCIDR config controller"
I0908 12:25:21.229262 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I0908 12:25:21.237957 1 config.go:309] "Starting node config controller"
I0908 12:25:21.237979 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I0908 12:25:21.237987 1 shared_informer.go:356] "Caches are synced" controller="node config"
I0908 12:25:21.330477 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I0908 12:25:21.340787 1 shared_informer.go:356] "Caches are synced" controller="service config"
I0908 12:25:21.340836 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-scheduler [152b108a85c3] <==
==> kube-scheduler [8e057b7f41de] <==
I0908 12:25:18.036734 1 serving.go:386] Generated self-signed cert in-memory
I0908 12:25:19.475887 1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
I0908 12:25:19.475921 1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0908 12:25:19.484805 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I0908 12:25:19.485031 1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
I0908 12:25:19.485180 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0908 12:25:19.485277 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0908 12:25:19.485375 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0908 12:25:19.485465 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0908 12:25:19.486343 1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
I0908 12:25:19.487832 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I0908 12:25:19.585623 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0908 12:25:19.585742 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0908 12:25:19.585630 1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
==> kubelet <==
Sep 08 12:39:12 functional-140475 kubelet[8797]: E0908 12:39:12.239385 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zjscm" podUID="6e670da3-937f-40cb-ac27-3943a8ec0fef"
Sep 08 12:39:20 functional-140475 kubelet[8797]: E0908 12:39:20.237173 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
Sep 08 12:39:20 functional-140475 kubelet[8797]: E0908 12:39:20.242262 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4kltn" podUID="6e537e2b-3a5d-4484-99aa-7af460bfbce7"
Sep 08 12:39:24 functional-140475 kubelet[8797]: E0908 12:39:24.247169 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zjscm" podUID="6e670da3-937f-40cb-ac27-3943a8ec0fef"
Sep 08 12:39:32 functional-140475 kubelet[8797]: E0908 12:39:32.237228 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
Sep 08 12:39:32 functional-140475 kubelet[8797]: E0908 12:39:32.241205 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4kltn" podUID="6e537e2b-3a5d-4484-99aa-7af460bfbce7"
Sep 08 12:39:38 functional-140475 kubelet[8797]: E0908 12:39:38.241532 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zjscm" podUID="6e670da3-937f-40cb-ac27-3943a8ec0fef"
Sep 08 12:39:45 functional-140475 kubelet[8797]: E0908 12:39:45.244857 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4kltn" podUID="6e537e2b-3a5d-4484-99aa-7af460bfbce7"
Sep 08 12:39:46 functional-140475 kubelet[8797]: E0908 12:39:46.237271 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
Sep 08 12:39:52 functional-140475 kubelet[8797]: E0908 12:39:52.239804 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zjscm" podUID="6e670da3-937f-40cb-ac27-3943a8ec0fef"
Sep 08 12:39:58 functional-140475 kubelet[8797]: E0908 12:39:58.242392 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4kltn" podUID="6e537e2b-3a5d-4484-99aa-7af460bfbce7"
Sep 08 12:40:01 functional-140475 kubelet[8797]: E0908 12:40:01.237213 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
Sep 08 12:40:05 functional-140475 kubelet[8797]: E0908 12:40:05.239461 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zjscm" podUID="6e670da3-937f-40cb-ac27-3943a8ec0fef"
Sep 08 12:40:12 functional-140475 kubelet[8797]: E0908 12:40:12.239250 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4kltn" podUID="6e537e2b-3a5d-4484-99aa-7af460bfbce7"
Sep 08 12:40:15 functional-140475 kubelet[8797]: E0908 12:40:15.237319 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
Sep 08 12:40:20 functional-140475 kubelet[8797]: E0908 12:40:20.241161 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zjscm" podUID="6e670da3-937f-40cb-ac27-3943a8ec0fef"
Sep 08 12:40:25 functional-140475 kubelet[8797]: E0908 12:40:25.239314 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4kltn" podUID="6e537e2b-3a5d-4484-99aa-7af460bfbce7"
Sep 08 12:40:26 functional-140475 kubelet[8797]: E0908 12:40:26.237273 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
Sep 08 12:40:31 functional-140475 kubelet[8797]: E0908 12:40:31.238843 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zjscm" podUID="6e670da3-937f-40cb-ac27-3943a8ec0fef"
Sep 08 12:40:38 functional-140475 kubelet[8797]: E0908 12:40:38.238014 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
Sep 08 12:40:40 functional-140475 kubelet[8797]: E0908 12:40:40.240390 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4kltn" podUID="6e537e2b-3a5d-4484-99aa-7af460bfbce7"
Sep 08 12:40:42 functional-140475 kubelet[8797]: E0908 12:40:42.241262 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zjscm" podUID="6e670da3-937f-40cb-ac27-3943a8ec0fef"
Sep 08 12:40:50 functional-140475 kubelet[8797]: E0908 12:40:50.247155 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
Sep 08 12:40:53 functional-140475 kubelet[8797]: E0908 12:40:53.238780 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4kltn" podUID="6e537e2b-3a5d-4484-99aa-7af460bfbce7"
Sep 08 12:40:55 functional-140475 kubelet[8797]: E0908 12:40:55.239758 8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zjscm" podUID="6e670da3-937f-40cb-ac27-3943a8ec0fef"
==> storage-provisioner [3e56d761c841] <==
W0908 12:40:34.584336 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:40:36.587515 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:40:36.591635 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:40:38.594616 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:40:38.599088 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:40:40.601793 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:40:40.606810 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:40:42.609330 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:40:42.613402 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:40:44.615964 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:40:44.621059 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:40:46.624192 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:40:46.628475 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:40:48.631106 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:40:48.636224 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:40:50.639229 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:40:50.643297 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:40:52.646679 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:40:52.652197 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:40:54.654871 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:40:54.658656 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:40:56.663449 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:40:56.668731 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:40:58.671786 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:40:58.677615 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
==> storage-provisioner [6bd6d37f8ca1] <==
I0908 12:24:39.805610 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0908 12:24:39.817707 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0908 12:24:39.818001 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
W0908 12:24:39.827051 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:24:43.281855 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:24:47.542472 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:24:51.141255 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:24:54.195301 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:24:57.217075 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:24:57.222388 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
I0908 12:24:57.222556 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0908 12:24:57.222806 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-140475_be3659dd-d2e5-49cd-8419-febf302bbd52!
I0908 12:24:57.224364 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6f4e765b-be4a-4c1c-98b1-2642ed77f8a2", APIVersion:"v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-140475_be3659dd-d2e5-49cd-8419-febf302bbd52 became leader
W0908 12:24:57.228147 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 12:24:57.233779 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
I0908 12:24:57.323828 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-140475_be3659dd-d2e5-49cd-8419-febf302bbd52!
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-140475 -n functional-140475
helpers_test.go:269: (dbg) Run: kubectl --context functional-140475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount sp-pod dashboard-metrics-scraper-77bf4d6c4c-4kltn kubernetes-dashboard-855c9754f9-zjscm
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context functional-140475 describe pod busybox-mount sp-pod dashboard-metrics-scraper-77bf4d6c4c-4kltn kubernetes-dashboard-855c9754f9-zjscm
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-140475 describe pod busybox-mount sp-pod dashboard-metrics-scraper-77bf4d6c4c-4kltn kubernetes-dashboard-855c9754f9-zjscm: exit status 1 (199.921752ms)
-- stdout --
Name: busybox-mount
Namespace: default
Priority: 0
Service Account: default
Node: functional-140475/192.168.49.2
Start Time: Mon, 08 Sep 2025 12:35:49 +0000
Labels: integration-test=busybox-mount
Annotations: <none>
Status: Succeeded
IP: 10.244.0.12
IPs:
IP: 10.244.0.12
Containers:
mount-munger:
Container ID: docker://70117b128644e2e4767f5fbbfdc02ceeecd480efe2fdcb53a147bb5f55a75ea6
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID: docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 08 Sep 2025 12:35:52 +0000
Finished: Mon, 08 Sep 2025 12:35:52 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-66vdl (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: HostPath (bare host directory volume)
Path: /mount-9p
HostPathType:
kube-api-access-66vdl:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m10s default-scheduler Successfully assigned default/busybox-mount to functional-140475
Normal Pulling 5m10s kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Normal Pulled 5m8s kubelet Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.051s (2.051s including waiting). Image size: 3547125 bytes.
Normal Created 5m8s kubelet Created container: mount-munger
Normal Started 5m8s kubelet Started container mount-munger
Name: sp-pod
Namespace: default
Priority: 0
Service Account: default
Node: functional-140475/192.168.49.2
Start Time: Mon, 08 Sep 2025 12:25:49 +0000
Labels: test=storage-provisioner
Annotations: <none>
Status: Pending
IP: 10.244.0.9
IPs:
IP: 10.244.0.9
Containers:
myfrontend:
Container ID:
Image: docker.io/nginx
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h6sml (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mypd:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: myclaim
ReadOnly: false
kube-api-access-h6sml:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 15m default-scheduler Successfully assigned default/sp-pod to functional-140475
Warning Failed 13m (x3 over 14m) kubelet Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal Pulling 12m (x5 over 15m) kubelet Pulling image "docker.io/nginx"
Warning Failed 12m (x2 over 15m) kubelet Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning Failed 12m (x5 over 15m) kubelet Error: ErrImagePull
Normal BackOff 4m56s (x43 over 15m) kubelet Back-off pulling image "docker.io/nginx"
Warning Failed 4m56s (x43 over 15m) kubelet Error: ImagePullBackOff
-- /stdout --
** stderr **
Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-4kltn" not found
Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-zjscm" not found
** /stderr **
helpers_test.go:287: kubectl --context functional-140475 describe pod busybox-mount sp-pod dashboard-metrics-scraper-77bf4d6c4c-4kltn kubernetes-dashboard-855c9754f9-zjscm: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.27s)
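Note on the failure above: the dashboard command never produced a URL because its pods (kubernetes-dashboard-855c9754f9-zjscm and dashboard-metrics-scraper-77bf4d6c4c-4kltn, along with sp-pod) stayed in ImagePullBackOff; every pull from docker.io was rejected with "toomanyrequests", the Docker Hub unauthenticated pull rate limit, so the test timed out after roughly 300s.

A minimal mitigation sketch, assuming the CI host itself can pull (or has cached) the images and that loading by tag is acceptable: the commands below are not part of the test and only illustrate pre-loading images into the cluster so the kubelet never has to contact Docker Hub.

  # pull once on the host (run `docker login` first if credentials are available)
  docker pull kubernetesui/dashboard:v2.7.0
  docker pull kubernetesui/metrics-scraper:v1.0.8
  docker pull nginx

  # load the cached images into the profile used by this test
  minikube -p functional-140475 image load kubernetesui/dashboard:v2.7.0
  minikube -p functional-140475 image load kubernetesui/metrics-scraper:v1.0.8
  minikube -p functional-140475 image load nginx

Because the dashboard addon references its images by digest (see the sha256 references in the kubelet log above), a registry mirror (for example `minikube start --registry-mirror=<mirror-url>`) or authenticated pulls inside the node may be needed instead of tag-based image loading; which route applies depends on the CI environment and is an assumption here, not something the log confirms.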