=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-799296 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-799296 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-799296 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-799296 --alsologtostderr -v=1] stderr:
I0908 11:11:35.335104 372718 out.go:360] Setting OutFile to fd 1 ...
I0908 11:11:35.335376 372718 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:11:35.335386 372718 out.go:374] Setting ErrFile to fd 2...
I0908 11:11:35.335391 372718 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:11:35.335629 372718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-360138/.minikube/bin
I0908 11:11:35.336001 372718 mustload.go:65] Loading cluster: functional-799296
I0908 11:11:35.336393 372718 config.go:182] Loaded profile config "functional-799296": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:11:35.336803 372718 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
I0908 11:11:35.336874 372718 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 11:11:35.354589 372718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35945
I0908 11:11:35.355183 372718 main.go:141] libmachine: () Calling .GetVersion
I0908 11:11:35.355825 372718 main.go:141] libmachine: Using API Version 1
I0908 11:11:35.355857 372718 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 11:11:35.356289 372718 main.go:141] libmachine: () Calling .GetMachineName
I0908 11:11:35.356557 372718 main.go:141] libmachine: (functional-799296) Calling .GetState
I0908 11:11:35.358508 372718 host.go:66] Checking if "functional-799296" exists ...
I0908 11:11:35.358952 372718 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
I0908 11:11:35.359001 372718 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 11:11:35.376816 372718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39839
I0908 11:11:35.377262 372718 main.go:141] libmachine: () Calling .GetVersion
I0908 11:11:35.377817 372718 main.go:141] libmachine: Using API Version 1
I0908 11:11:35.377844 372718 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 11:11:35.378215 372718 main.go:141] libmachine: () Calling .GetMachineName
I0908 11:11:35.378438 372718 main.go:141] libmachine: (functional-799296) Calling .DriverName
I0908 11:11:35.378573 372718 api_server.go:166] Checking apiserver status ...
I0908 11:11:35.378648 372718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0908 11:11:35.378691 372718 main.go:141] libmachine: (functional-799296) Calling .GetSSHHostname
I0908 11:11:35.381649 372718 main.go:141] libmachine: (functional-799296) DBG | domain functional-799296 has defined MAC address 52:54:00:58:fa:55 in network mk-functional-799296
I0908 11:11:35.382059 372718 main.go:141] libmachine: (functional-799296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:fa:55", ip: ""} in network mk-functional-799296: {Iface:virbr1 ExpiryTime:2025-09-08 12:08:00 +0000 UTC Type:0 Mac:52:54:00:58:fa:55 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:functional-799296 Clientid:01:52:54:00:58:fa:55}
I0908 11:11:35.382102 372718 main.go:141] libmachine: (functional-799296) DBG | domain functional-799296 has defined IP address 192.168.39.63 and MAC address 52:54:00:58:fa:55 in network mk-functional-799296
I0908 11:11:35.382189 372718 main.go:141] libmachine: (functional-799296) Calling .GetSSHPort
I0908 11:11:35.382421 372718 main.go:141] libmachine: (functional-799296) Calling .GetSSHKeyPath
I0908 11:11:35.382586 372718 main.go:141] libmachine: (functional-799296) Calling .GetSSHUsername
I0908 11:11:35.382752 372718 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21512-360138/.minikube/machines/functional-799296/id_rsa Username:docker}
I0908 11:11:35.480055 372718 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/10112/cgroup
W0908 11:11:35.494207 372718 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/10112/cgroup: Process exited with status 1
stdout:
stderr:
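The failed `egrep ^[0-9]+:freezer:` above is expected on a cgroup v2 host: the unified hierarchy exposes only a single `0::/<path>` entry in /proc/<pid>/cgroup, so no `N:freezer:` line exists, the probe exits 1, and minikube falls through to the HTTP healthz check that follows. A minimal Go sketch of that probe (not minikube's actual code; the PID is taken from the log above):

```go
// Sketch of the freezer-cgroup probe that fails above. On cgroup v1,
// /proc/<pid>/cgroup contains lines like "7:freezer:/kubepods/..."; on
// cgroup v2 there is only "0::/..." and the search finds nothing.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func hasFreezerCgroup(pid int) (bool, error) {
	f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return false, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// v1 format is "hierarchy-ID:controller-list:path"; controllers
		// may be comma-separated (e.g. "3:cpu,cpuacct:/...").
		parts := strings.SplitN(sc.Text(), ":", 3)
		if len(parts) == 3 && strings.Contains(parts[1], "freezer") {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hasFreezerCgroup(10112) // apiserver PID from the log above
	if err != nil || !ok {
		fmt.Println("no freezer cgroup found; falling back to /healthz probe")
		return
	}
	fmt.Println("freezer cgroup present (cgroup v1 host)")
}
```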
I0908 11:11:35.494277 372718 ssh_runner.go:195] Run: ls
I0908 11:11:35.500715 372718 api_server.go:253] Checking apiserver healthz at https://192.168.39.63:8441/healthz ...
I0908 11:11:35.506006 372718 api_server.go:279] https://192.168.39.63:8441/healthz returned 200:
ok
W0908 11:11:35.506060 372718 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0908 11:11:35.506253 372718 config.go:182] Loaded profile config "functional-799296": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:11:35.506279 372718 addons.go:69] Setting dashboard=true in profile "functional-799296"
I0908 11:11:35.506291 372718 addons.go:238] Setting addon dashboard=true in "functional-799296"
I0908 11:11:35.506319 372718 host.go:66] Checking if "functional-799296" exists ...
I0908 11:11:35.506557 372718 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
I0908 11:11:35.506607 372718 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 11:11:35.523353 372718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40039
I0908 11:11:35.523944 372718 main.go:141] libmachine: () Calling .GetVersion
I0908 11:11:35.524448 372718 main.go:141] libmachine: Using API Version 1
I0908 11:11:35.524473 372718 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 11:11:35.524863 372718 main.go:141] libmachine: () Calling .GetMachineName
I0908 11:11:35.525379 372718 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
I0908 11:11:35.525437 372718 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 11:11:35.541852 372718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36343
I0908 11:11:35.542272 372718 main.go:141] libmachine: () Calling .GetVersion
I0908 11:11:35.542739 372718 main.go:141] libmachine: Using API Version 1
I0908 11:11:35.542768 372718 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 11:11:35.543128 372718 main.go:141] libmachine: () Calling .GetMachineName
I0908 11:11:35.543299 372718 main.go:141] libmachine: (functional-799296) Calling .GetState
I0908 11:11:35.545001 372718 main.go:141] libmachine: (functional-799296) Calling .DriverName
I0908 11:11:35.547485 372718 out.go:179] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0908 11:11:35.549144 372718 out.go:179] - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0908 11:11:35.550397 372718 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0908 11:11:35.550414 372718 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0908 11:11:35.550441 372718 main.go:141] libmachine: (functional-799296) Calling .GetSSHHostname
I0908 11:11:35.553644 372718 main.go:141] libmachine: (functional-799296) DBG | domain functional-799296 has defined MAC address 52:54:00:58:fa:55 in network mk-functional-799296
I0908 11:11:35.554051 372718 main.go:141] libmachine: (functional-799296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:fa:55", ip: ""} in network mk-functional-799296: {Iface:virbr1 ExpiryTime:2025-09-08 12:08:00 +0000 UTC Type:0 Mac:52:54:00:58:fa:55 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:functional-799296 Clientid:01:52:54:00:58:fa:55}
I0908 11:11:35.554087 372718 main.go:141] libmachine: (functional-799296) DBG | domain functional-799296 has defined IP address 192.168.39.63 and MAC address 52:54:00:58:fa:55 in network mk-functional-799296
I0908 11:11:35.554255 372718 main.go:141] libmachine: (functional-799296) Calling .GetSSHPort
I0908 11:11:35.554485 372718 main.go:141] libmachine: (functional-799296) Calling .GetSSHKeyPath
I0908 11:11:35.554682 372718 main.go:141] libmachine: (functional-799296) Calling .GetSSHUsername
I0908 11:11:35.554848 372718 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21512-360138/.minikube/machines/functional-799296/id_rsa Username:docker}
I0908 11:11:35.650834 372718 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0908 11:11:35.650869 372718 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0908 11:11:35.674352 372718 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0908 11:11:35.674387 372718 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0908 11:11:35.698687 372718 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0908 11:11:35.698718 372718 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0908 11:11:35.721254 372718 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0908 11:11:35.721281 372718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0908 11:11:35.744089 372718 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0908 11:11:35.744123 372718 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0908 11:11:35.767365 372718 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0908 11:11:35.767400 372718 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0908 11:11:35.789824 372718 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0908 11:11:35.789856 372718 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0908 11:11:35.813285 372718 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0908 11:11:35.813312 372718 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0908 11:11:35.839297 372718 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0908 11:11:35.839325 372718 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0908 11:11:35.865635 372718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
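The apply step above is a single kubectl invocation spanning all ten staged manifests. A local, SSH-free sketch of the equivalent call, assuming `kubectl` is on PATH and the files have already been copied under /etc/kubernetes/addons/:

```go
// Sketch of the addon-apply step: build one "kubectl apply" command with
// a -f flag per staged dashboard manifest, mirroring the log line above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	manifests := []string{
		"dashboard-ns.yaml", "dashboard-clusterrole.yaml",
		"dashboard-clusterrolebinding.yaml", "dashboard-configmap.yaml",
		"dashboard-dp.yaml", "dashboard-role.yaml",
		"dashboard-rolebinding.yaml", "dashboard-sa.yaml",
		"dashboard-secret.yaml", "dashboard-svc.yaml",
	}

	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", "/etc/kubernetes/addons/"+m)
	}

	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
```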
I0908 11:11:36.853488 372718 main.go:141] libmachine: Making call to close driver server
I0908 11:11:36.853529 372718 main.go:141] libmachine: (functional-799296) Calling .Close
I0908 11:11:36.854002 372718 main.go:141] libmachine: Successfully made call to close driver server
I0908 11:11:36.854022 372718 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 11:11:36.854031 372718 main.go:141] libmachine: Making call to close driver server
I0908 11:11:36.854038 372718 main.go:141] libmachine: (functional-799296) Calling .Close
I0908 11:11:36.854299 372718 main.go:141] libmachine: Successfully made call to close driver server
I0908 11:11:36.854314 372718 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 11:11:36.855917 372718 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p functional-799296 addons enable metrics-server
I0908 11:11:36.857358 372718 addons.go:201] Writing out "functional-799296" config to set dashboard=true...
W0908 11:11:36.857599 372718 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0908 11:11:36.858291 372718 kapi.go:59] client config for functional-799296: &rest.Config{Host:"https://192.168.39.63:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt", KeyFile:"/home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.key", CAFile:"/home/jenkins/minikube-integration/21512-360138/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0908 11:11:36.858762 372718 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0908 11:11:36.858782 372718 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0908 11:11:36.858786 372718 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0908 11:11:36.858792 372718 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0908 11:11:36.858799 372718 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0908 11:11:36.874268 372718 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard kubernetes-dashboard 0d1faebe-58e7-48ae-899b-789c325ea834 865 0 2025-09-08 11:11:36 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-08 11:11:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.106.225.130,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.106.225.130],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0908 11:11:36.874479 372718 out.go:285] * Launching proxy ...
* Launching proxy ...
I0908 11:11:36.874580 372718 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-799296 proxy --port 36195]
I0908 11:11:36.874947 372718 dashboard.go:157] Waiting for kubectl to output host:port ...
I0908 11:11:36.923818 372718 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
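The launch step above starts `kubectl proxy` and waits for its stdout to announce the listening host:port before probing. A self-contained sketch of that pattern, with the context name and port taken from the log and only minimal error handling:

```go
// Sketch: start kubectl proxy and scan stdout for the
// "Starting to serve on host:port" line recorded in the log above.
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "functional-799296",
		"proxy", "--port", "36195")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		if rest, ok := strings.CutPrefix(sc.Text(), "Starting to serve on "); ok {
			fmt.Println("proxy listening at", rest) // e.g. 127.0.0.1:36195
			break
		}
	}
	// The proxy keeps running; a real caller would hold on to cmd and
	// terminate it once the dashboard URL has been handed to the user.
}
```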
W0908 11:11:36.923857 372718 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0908 11:11:36.944172 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a0c8c7c6-42d9-407b-9c6a-d9dfd1115650] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:36 GMT]] Body:0xc000251480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000254a00 TLS:<nil>}
I0908 11:11:36.944270 372718 retry.go:31] will retry after 75.817µs: Temporary Error: unexpected response code: 503
I0908 11:11:36.948009 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f3e5a096-543d-417a-b85a-e2257a129c03] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:36 GMT]] Body:0xc0006a1dc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017c6000 TLS:<nil>}
I0908 11:11:36.948081 372718 retry.go:31] will retry after 153.92µs: Temporary Error: unexpected response code: 503
I0908 11:11:36.957545 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[57ff71fa-876f-459c-83bc-6bee7625ea28] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:36 GMT]] Body:0xc000944000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000254b40 TLS:<nil>}
I0908 11:11:36.957625 372718 retry.go:31] will retry after 316.897µs: Temporary Error: unexpected response code: 503
I0908 11:11:36.963453 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8914a869-1c91-450b-9397-195c82c40a23] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:36 GMT]] Body:0xc0006a1ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017c6140 TLS:<nil>}
I0908 11:11:36.963526 372718 retry.go:31] will retry after 455.346µs: Temporary Error: unexpected response code: 503
I0908 11:11:36.973948 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a08e0027-743a-4696-91a4-e0b892d7e73a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:36 GMT]] Body:0xc0009a4a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000254c80 TLS:<nil>}
I0908 11:11:36.974027 372718 retry.go:31] will retry after 751.4µs: Temporary Error: unexpected response code: 503
I0908 11:11:36.984903 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5d21a8a6-d59b-42b4-b304-ea78e3a48893] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:36 GMT]] Body:0xc000944140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000254f00 TLS:<nil>}
I0908 11:11:36.984998 372718 retry.go:31] will retry after 899.842µs: Temporary Error: unexpected response code: 503
I0908 11:11:36.990164 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[161c4b8c-9645-4342-8927-39069f5c6449] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:36 GMT]] Body:0xc0009a4b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017c6280 TLS:<nil>}
I0908 11:11:36.990259 372718 retry.go:31] will retry after 848.699µs: Temporary Error: unexpected response code: 503
I0908 11:11:36.995703 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[96b32e4e-2a85-4582-9a7e-0aaaf9f718ff] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:36 GMT]] Body:0xc000b63900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000255040 TLS:<nil>}
I0908 11:11:36.995785 372718 retry.go:31] will retry after 2.534358ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.016207 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[be9518ff-8847-4f49-a327-f42f08597b91] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc0009a4f80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166e3c0 TLS:<nil>}
I0908 11:11:37.016289 372718 retry.go:31] will retry after 2.301065ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.025565 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b37dd137-3754-4325-981d-f9dc441ec3a0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc000944240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000255180 TLS:<nil>}
I0908 11:11:37.025632 372718 retry.go:31] will retry after 2.477381ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.037666 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[efc20365-8d46-4c2d-aef2-02c965455437] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc000b63a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017c63c0 TLS:<nil>}
I0908 11:11:37.037740 372718 retry.go:31] will retry after 6.290045ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.051940 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f39203b2-28b5-43f4-82ab-cf96cc7b12cc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc0009a5080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166e500 TLS:<nil>}
I0908 11:11:37.052025 372718 retry.go:31] will retry after 5.984699ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.064416 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0a34184e-1469-46bf-9474-d261d9712b13] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc000b63ac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002552c0 TLS:<nil>}
I0908 11:11:37.064498 372718 retry.go:31] will retry after 17.26476ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.087899 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[165b1b7f-7d41-4af5-be6e-748039c8af0f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc0009a5200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166e640 TLS:<nil>}
I0908 11:11:37.087986 372718 retry.go:31] will retry after 25.109024ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.119040 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2ce101df-ad53-463e-b756-7db653612d56] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc000944380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000255400 TLS:<nil>}
I0908 11:11:37.119154 372718 retry.go:31] will retry after 27.447501ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.154594 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[edb61309-d8b2-48bb-8975-30d331952f8e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc000b63bc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017c6500 TLS:<nil>}
I0908 11:11:37.154708 372718 retry.go:31] will retry after 31.663509ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.191078 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cfc2e302-46fa-4d41-a73f-fe14d180c003] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc000944480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166e780 TLS:<nil>}
I0908 11:11:37.191177 372718 retry.go:31] will retry after 94.549552ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.297218 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3f87e9e7-a5a6-4212-947e-2830adce795b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc000b63cc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017c6640 TLS:<nil>}
I0908 11:11:37.297291 372718 retry.go:31] will retry after 123.169227ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.424537 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d148a078-9298-41e5-95d5-12e26eee1105] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc000b63d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166e8c0 TLS:<nil>}
I0908 11:11:37.424606 372718 retry.go:31] will retry after 97.82893ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.526339 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[734de8eb-173c-4985-8219-452253e07819] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc000b63e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166ea00 TLS:<nil>}
I0908 11:11:37.526409 372718 retry.go:31] will retry after 132.291895ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.663790 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4d10e295-5842-489d-b41d-21b74862ee17] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc000b63f40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166eb40 TLS:<nil>}
I0908 11:11:37.663865 372718 retry.go:31] will retry after 472.21272ms: Temporary Error: unexpected response code: 503
I0908 11:11:38.143539 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ee2c2fe3-4e02-4deb-95cb-81db3919d1da] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:38 GMT]] Body:0xc0009a5380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166ec80 TLS:<nil>}
I0908 11:11:38.143647 372718 retry.go:31] will retry after 574.282313ms: Temporary Error: unexpected response code: 503
I0908 11:11:38.722050 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1b9db4a8-6de3-4d54-9ec8-8471154363ce] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:38 GMT]] Body:0xc000944600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000255540 TLS:<nil>}
I0908 11:11:38.722129 372718 retry.go:31] will retry after 548.130911ms: Temporary Error: unexpected response code: 503
I0908 11:11:39.274672 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a9956e46-a67d-422d-9c14-fe09a65be669] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:39 GMT]] Body:0xc000a3e0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017c6780 TLS:<nil>}
I0908 11:11:39.274776 372718 retry.go:31] will retry after 652.67111ms: Temporary Error: unexpected response code: 503
I0908 11:11:39.932679 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7896bcde-2ed1-41de-80bb-42f54569ad84] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:39 GMT]] Body:0xc0009a54c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166edc0 TLS:<nil>}
I0908 11:11:39.932783 372718 retry.go:31] will retry after 1.108670248s: Temporary Error: unexpected response code: 503
I0908 11:11:41.046567 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b4df7fef-1270-473b-9e5d-344c205818a7] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:41 GMT]] Body:0xc000944780 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017c68c0 TLS:<nil>}
I0908 11:11:41.046672 372718 retry.go:31] will retry after 2.561254959s: Temporary Error: unexpected response code: 503
I0908 11:11:43.615665 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d3eb623a-af6e-4cde-9039-2ddcd1c94975] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:43 GMT]] Body:0xc0009a55c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017c6a00 TLS:<nil>}
I0908 11:11:43.615748 372718 retry.go:31] will retry after 4.259787307s: Temporary Error: unexpected response code: 503
I0908 11:11:47.879540 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bffc825d-021a-45f2-8936-820a3520e916] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:47 GMT]] Body:0xc0009a5680 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000255680 TLS:<nil>}
I0908 11:11:47.879622 372718 retry.go:31] will retry after 5.012788371s: Temporary Error: unexpected response code: 503
I0908 11:11:52.898765 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[84f8f7e6-5caf-4cc4-bccd-cda36aa61aff] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:52 GMT]] Body:0xc000944880 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166ef00 TLS:<nil>}
I0908 11:11:52.898865 372718 retry.go:31] will retry after 4.333629776s: Temporary Error: unexpected response code: 503
I0908 11:11:57.239607 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f5efddca-f342-47d9-8031-4beb7a6bf657] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:57 GMT]] Body:0xc0009a5700 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166f040 TLS:<nil>}
I0908 11:11:57.239704 372718 retry.go:31] will retry after 9.108883573s: Temporary Error: unexpected response code: 503
I0908 11:12:06.352893 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d5a0bc37-6337-4b62-bfa0-72ff56b769cc] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:12:06 GMT]] Body:0xc000a3ec00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017c6b40 TLS:<nil>}
I0908 11:12:06.352991 372718 retry.go:31] will retry after 17.488649229s: Temporary Error: unexpected response code: 503
I0908 11:12:23.848175 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[acd6d977-3961-4a01-a57e-e7cbafce30cf] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:12:23 GMT]] Body:0xc000a3ec80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002557c0 TLS:<nil>}
I0908 11:12:23.848262 372718 retry.go:31] will retry after 32.352203899s: Temporary Error: unexpected response code: 503
I0908 11:12:56.204118 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[da911960-7a47-4c06-8a4b-8bdcfccb20ec] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:12:56 GMT]] Body:0xc000944980 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166f180 TLS:<nil>}
I0908 11:12:56.204226 372718 retry.go:31] will retry after 48.210112898s: Temporary Error: unexpected response code: 503
I0908 11:13:44.422502 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[81d43349-564a-4bf2-8eca-1db2f3e93278] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:13:44 GMT]] Body:0xc0009a4ac0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166e140 TLS:<nil>}
I0908 11:13:44.422603 372718 retry.go:31] will retry after 53.697607322s: Temporary Error: unexpected response code: 503
I0908 11:14:38.124846 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7062205b-2cbc-4957-827f-7e241b51ef53] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:14:38 GMT]] Body:0xc00057ea00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000254140 TLS:<nil>}
I0908 11:14:38.124947 372718 retry.go:31] will retry after 36.612596234s: Temporary Error: unexpected response code: 503
I0908 11:15:14.742050 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[424ca690-f465-4982-9f4c-a2b10aed5fff] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:15:14 GMT]] Body:0xc0009a4b80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166f2c0 TLS:<nil>}
I0908 11:15:14.742239 372718 retry.go:31] will retry after 31.829625288s: Temporary Error: unexpected response code: 503
I0908 11:15:46.575870 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0a6551e3-6f8d-48d8-b2d4-06ff855c7226] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:15:46 GMT]] Body:0xc0009a4ac0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000254280 TLS:<nil>}
I0908 11:15:46.575962 372718 retry.go:31] will retry after 39.064713825s: Temporary Error: unexpected response code: 503
I0908 11:16:25.645419 372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[beffe0ba-3072-418c-9e75-b0a53089038f] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:16:25 GMT]] Body:0xc000a3e480 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166e000 TLS:<nil>}
I0908 11:16:25.645512 372718 retry.go:31] will retry after 1m17.567564981s: Temporary Error: unexpected response code: 503
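The probe sequence above is a retry loop with roughly exponential backoff: every 503 from the proxied dashboard URL is treated as a temporary error, and the wait grows from tens of microseconds toward a cap above a minute. A self-contained Go sketch of the same pattern (the URL matches the log; the exact growth factor, jitter, and cap are assumptions, not minikube's actual policy):

```go
// Sketch of the health-probe loop: poll the kubectl-proxy URL until it
// returns 200, backing off roughly exponentially between attempts.
package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

func main() {
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/" +
		"services/http:kubernetes-dashboard:/proxy/"
	delay := 100 * time.Microsecond

	for deadline := time.Now().Add(5 * time.Minute); time.Now().Before(deadline); {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("dashboard is healthy")
				return
			}
			fmt.Printf("temporary error: unexpected response code: %d\n", resp.StatusCode)
		}
		// Grow the delay with some jitter, capped so waits stay bounded.
		delay = time.Duration(float64(delay) * (1.5 + rand.Float64()))
		if delay > 90*time.Second {
			delay = 90 * time.Second
		}
		fmt.Printf("will retry after %s\n", delay)
		time.Sleep(delay)
	}
	fmt.Println("gave up: dashboard never returned 200")
}
```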
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-799296 -n functional-799296
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p functional-799296 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-799296 logs -n 25: (1.103945163s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs:
-- stdout --
==> Audit <==
┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ functional-799296 ssh -n functional-799296 sudo cat /home/docker/cp-test.txt │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
│ cp │ functional-799296 cp functional-799296:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2557956061/001/cp-test.txt │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
│ ssh │ functional-799296 ssh -n functional-799296 sudo cat /home/docker/cp-test.txt │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
│ cp │ functional-799296 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
│ ssh │ functional-799296 ssh -n functional-799296 sudo cat /tmp/does/not/exist/cp-test.txt │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
│ start │ -p functional-799296 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ │
│ start │ -p functional-799296 --dry-run --alsologtostderr -v=1 --driver=kvm2 │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ │
│ ssh │ functional-799296 ssh echo hello │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
│ ssh │ functional-799296 ssh cat /etc/hostname │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
│ dashboard │ --url --port 36195 -p functional-799296 --alsologtostderr -v=1 │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ │
│ service │ functional-799296 service list │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
│ service │ functional-799296 service list -o json │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
│ service │ functional-799296 service --namespace=default --https --url hello-node │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
│ service │ functional-799296 service hello-node --url --format={{.IP}} │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
│ service │ functional-799296 service hello-node --url │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
│ image │ functional-799296 image ls --format short --alsologtostderr │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
│ image │ functional-799296 image ls --format yaml --alsologtostderr │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
│ ssh │ functional-799296 ssh pgrep buildkitd │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ │
│ image │ functional-799296 image build -t localhost/my-image:functional-799296 testdata/build --alsologtostderr │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
│ image │ functional-799296 image ls │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
│ image │ functional-799296 image ls --format json --alsologtostderr │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
│ image │ functional-799296 image ls --format table --alsologtostderr │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
│ update-context │ functional-799296 update-context --alsologtostderr -v=2 │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
│ update-context │ functional-799296 update-context --alsologtostderr -v=2 │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
│ update-context │ functional-799296 update-context --alsologtostderr -v=2 │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/09/08 11:11:34
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.24.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0908 11:11:34.770029 372629 out.go:360] Setting OutFile to fd 1 ...
I0908 11:11:34.770326 372629 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:11:34.770337 372629 out.go:374] Setting ErrFile to fd 2...
I0908 11:11:34.770343 372629 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:11:34.770556 372629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-360138/.minikube/bin
I0908 11:11:34.771311 372629 out.go:368] Setting JSON to false
I0908 11:11:34.772769 372629 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3240,"bootTime":1757326655,"procs":279,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0908 11:11:34.772896 372629 start.go:140] virtualization: kvm guest
I0908 11:11:34.775116 372629 out.go:179] * [functional-799296] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
I0908 11:11:34.777118 372629 out.go:179] - MINIKUBE_LOCATION=21512
I0908 11:11:34.777144 372629 notify.go:220] Checking for updates...
I0908 11:11:34.780199 372629 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0908 11:11:34.781673 372629 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21512-360138/kubeconfig
I0908 11:11:34.783184 372629 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-360138/.minikube
I0908 11:11:34.784823 372629 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I0908 11:11:34.786318 372629 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I0908 11:11:34.788394 372629 config.go:182] Loaded profile config "functional-799296": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:11:34.788892 372629 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
I0908 11:11:34.788991 372629 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 11:11:34.806214 372629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40839
I0908 11:11:34.806900 372629 main.go:141] libmachine: () Calling .GetVersion
I0908 11:11:34.807525 372629 main.go:141] libmachine: Using API Version 1
I0908 11:11:34.807543 372629 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 11:11:34.808041 372629 main.go:141] libmachine: () Calling .GetMachineName
I0908 11:11:34.808244 372629 main.go:141] libmachine: (functional-799296) Calling .DriverName
I0908 11:11:34.808526 372629 driver.go:421] Setting default libvirt URI to qemu:///system
I0908 11:11:34.808858 372629 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
I0908 11:11:34.808913 372629 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 11:11:34.825386 372629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42697
I0908 11:11:34.825932 372629 main.go:141] libmachine: () Calling .GetVersion
I0908 11:11:34.826409 372629 main.go:141] libmachine: Using API Version 1
I0908 11:11:34.826443 372629 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 11:11:34.826886 372629 main.go:141] libmachine: () Calling .GetMachineName
I0908 11:11:34.827109 372629 main.go:141] libmachine: (functional-799296) Calling .DriverName
I0908 11:11:34.863939 372629 out.go:179] * Using the kvm2 driver based on existing profile
I0908 11:11:34.865329 372629 start.go:304] selected driver: kvm2
I0908 11:11:34.865348 372629 start.go:918] validating driver "kvm2" against &{Name:functional-799296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-799296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.63 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0908 11:11:34.865486 372629 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0908 11:11:34.866509 372629 cni.go:84] Creating CNI manager for ""
I0908 11:11:34.866565 372629 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0908 11:11:34.866620 372629 start.go:348] cluster config:
{Name:functional-799296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-799296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.63 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0908 11:11:34.868261 372629 out.go:179] * dry-run validation complete!
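The cluster config echoed above is persisted per profile and can be inspected outside the log; a minimal sketch, assuming the default MINIKUBE_HOME layout (the CI run above uses a custom one under /home/jenkins/minikube-integration):

  # List known profiles, then dump the stored cluster config for this one.
  minikube profile list
  cat "$HOME/.minikube/profiles/functional-799296/config.json"
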
==> Docker <==
Sep 08 11:11:52 functional-799296 cri-dockerd[8537]: time="2025-09-08T11:11:52Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Pulling from kubernetesui/metrics-scraper"
Sep 08 11:11:52 functional-799296 cri-dockerd[8537]: time="2025-09-08T11:11:52Z" level=error msg="error collecting stats for container 'kube-apiserver': Error response from daemon: No such container: df39d7e3c156915d45768a83a7090185a60ac983d70afb870654c18a67cd5559"
Sep 08 11:11:53 functional-799296 dockerd[7653]: time="2025-09-08T11:11:53.503203138Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Sep 08 11:11:53 functional-799296 dockerd[7653]: time="2025-09-08T11:11:53.904296453Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 08 11:12:02 functional-799296 cri-dockerd[8537]: time="2025-09-08T11:12:02Z" level=error msg="error getting RW layer size for container ID 'df39d7e3c156915d45768a83a7090185a60ac983d70afb870654c18a67cd5559': Error response from daemon: No such container: df39d7e3c156915d45768a83a7090185a60ac983d70afb870654c18a67cd5559"
Sep 08 11:12:02 functional-799296 cri-dockerd[8537]: time="2025-09-08T11:12:02Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'df39d7e3c156915d45768a83a7090185a60ac983d70afb870654c18a67cd5559'"
Sep 08 11:12:09 functional-799296 dockerd[7653]: time="2025-09-08T11:12:09.184879445Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 08 11:12:14 functional-799296 dockerd[7653]: time="2025-09-08T11:12:14.163113354Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 08 11:12:15 functional-799296 dockerd[7653]: time="2025-09-08T11:12:15.505099269Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Sep 08 11:12:15 functional-799296 dockerd[7653]: time="2025-09-08T11:12:15.907556845Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 08 11:12:23 functional-799296 dockerd[7653]: time="2025-09-08T11:12:23.516199292Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Sep 08 11:12:23 functional-799296 dockerd[7653]: time="2025-09-08T11:12:23.920186221Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 08 11:12:56 functional-799296 dockerd[7653]: time="2025-09-08T11:12:56.168763802Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 08 11:12:57 functional-799296 dockerd[7653]: time="2025-09-08T11:12:57.131125362Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 08 11:13:04 functional-799296 dockerd[7653]: time="2025-09-08T11:13:04.547498819Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Sep 08 11:13:04 functional-799296 dockerd[7653]: time="2025-09-08T11:13:04.953208912Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 08 11:13:07 functional-799296 dockerd[7653]: time="2025-09-08T11:13:07.500732587Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Sep 08 11:13:07 functional-799296 dockerd[7653]: time="2025-09-08T11:13:07.924322001Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 08 11:14:22 functional-799296 dockerd[7653]: time="2025-09-08T11:14:22.405899966Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 08 11:14:22 functional-799296 cri-dockerd[8537]: time="2025-09-08T11:14:22Z" level=info msg="Stop pulling image docker.io/mysql:5.7: 5.7: Pulling from library/mysql"
Sep 08 11:14:24 functional-799296 dockerd[7653]: time="2025-09-08T11:14:24.129416999Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 08 11:14:35 functional-799296 dockerd[7653]: time="2025-09-08T11:14:35.505816861Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Sep 08 11:14:35 functional-799296 dockerd[7653]: time="2025-09-08T11:14:35.902715633Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 08 11:14:38 functional-799296 dockerd[7653]: time="2025-09-08T11:14:38.505499622Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Sep 08 11:14:38 functional-799296 dockerd[7653]: time="2025-09-08T11:14:38.903854978Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
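Every pull failure in the Docker daemon log above has the same root cause: Docker Hub's unauthenticated pull rate limit ("toomanyrequests"). Two workaround sketches, assuming either a Docker Hub account or copies of the images already present on the host:

  # Authenticate inside the minikube node so pulls count against an account quota.
  minikube -p functional-799296 ssh -- docker login -u <dockerhub-user>
  # Or bypass the registry entirely by loading a host-side image into the cluster.
  minikube -p functional-799296 image load docker.io/kubernetesui/dashboard:v2.7.0
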
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
85a061fe28404 kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 5 minutes ago Running echo-server 0 5477cbca6c431 hello-node-75c85bcc94-hpxnn
c0b1dc9ecbc3d gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 5 minutes ago Exited mount-munger 0 23b66ffbe7bcc busybox-mount
991dc27df9488 kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 5 minutes ago Running echo-server 0 bd82e1912f29b hello-node-connect-7d85dfc575-z44vz
46e16f741b0a8 52546a367cc9e 5 minutes ago Running coredns 3 6bc5cdc5ab0c5 coredns-66bc5c9577-jgsmm
fe4a8982187eb 6e38f40d628db 5 minutes ago Running storage-provisioner 3 06d5d5b14338d storage-provisioner
2b2df4438da81 df0860106674d 5 minutes ago Running kube-proxy 3 8677445b2febe kube-proxy-4vghz
0d893b24e3bfe a0af72f2ec6d6 5 minutes ago Running kube-controller-manager 3 e442b985ee5af kube-controller-manager-functional-799296
fc8fecb4cc17d 90550c43ad2bc 5 minutes ago Running kube-apiserver 0 915b70b583616 kube-apiserver-functional-799296
8c478ed91b786 5f1f5298c888d 5 minutes ago Running etcd 3 1dadbf1bf582d etcd-functional-799296
3e020aa535204 46169d968e920 5 minutes ago Running kube-scheduler 4 fa14ee4a8ddc3 kube-scheduler-functional-799296
98555532e7d99 46169d968e920 5 minutes ago Exited kube-scheduler 3 554dbe054cfea kube-scheduler-functional-799296
48239ff88be42 a0af72f2ec6d6 5 minutes ago Exited kube-controller-manager 2 b84ea368d516e kube-controller-manager-functional-799296
a3aded15ab5cd df0860106674d 5 minutes ago Exited kube-proxy 2 bdb66340f0ffa kube-proxy-4vghz
8da95a34aad1d 6e38f40d628db 6 minutes ago Exited storage-provisioner 2 548e32b88c739 storage-provisioner
7ebc0a8be557e 52546a367cc9e 6 minutes ago Exited coredns 2 630bd78209478 coredns-66bc5c9577-jgsmm
28f5c5f342b5a 5f1f5298c888d 6 minutes ago Exited etcd 2 85dc20bb58775 etcd-functional-799296
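The table above is the CRI view that minikube logs collects; the same listing can be reproduced against a still-running profile, since crictl ships in the minikube guest and talks to cri-dockerd here:

  minikube -p functional-799296 ssh -- sudo crictl ps -a
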
==> coredns [46e16f741b0a] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:47197 - 38538 "HINFO IN 2640664162402965986.7445642495243262507. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.344615448s
==> coredns [7ebc0a8be557] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
.:53
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:40026 - 49666 "HINFO IN 6934977106845304138.5256607027527752237. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.102570572s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
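The two coredns logs bracket the apiserver restart: the older instance (7ebc0a8be557) could not reach the kubernetes service at 10.96.0.1:443 while the control plane was down, waited, served briefly, then received SIGTERM; its replacement (46e16f741b0a) came up cleanly. To pull a terminated instance's log from a live cluster (pod name taken from the container table above):

  kubectl --context functional-799296 -n kube-system logs coredns-66bc5c9577-jgsmm --previous
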
==> describe nodes <==
Name: functional-799296
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-799296
kubernetes.io/os=linux
minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
minikube.k8s.io/name=functional-799296
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_09_08T11_08_34_0700
minikube.k8s.io/version=v1.36.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 08 Sep 2025 11:08:30 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-799296
AcquireTime: <unset>
RenewTime: Mon, 08 Sep 2025 11:16:33 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 08 Sep 2025 11:14:40 +0000 Mon, 08 Sep 2025 11:08:27 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 08 Sep 2025 11:14:40 +0000 Mon, 08 Sep 2025 11:08:27 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 08 Sep 2025 11:14:40 +0000 Mon, 08 Sep 2025 11:08:27 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 08 Sep 2025 11:14:40 +0000 Mon, 08 Sep 2025 11:08:38 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.63
Hostname: functional-799296
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4008588Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4008588Ki
pods: 110
System Info:
Machine ID: 9b5d811bb77448bf80c1bfb1571c2de4
System UUID: 9b5d811b-b774-48bf-80c1-bfb1571c2de4
Boot ID: 07459d39-5da9-4917-ab00-ad155ef2fd22
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://28.4.0
Kubelet Version: v1.34.0
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (13 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-75c85bcc94-hpxnn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m9s
default hello-node-connect-7d85dfc575-z44vz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m16s
default mysql-5bb876957f-bm5sk 600m (30%) 700m (35%) 512Mi (13%) 700Mi (17%) 5m4s
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m11s
kube-system coredns-66bc5c9577-jgsmm 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 7m58s
kube-system etcd-functional-799296 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 8m5s
kube-system kube-apiserver-functional-799296 250m (12%) 0 (0%) 0 (0%) 0 (0%) 5m39s
kube-system kube-controller-manager-functional-799296 200m (10%) 0 (0%) 0 (0%) 0 (0%) 8m3s
kube-system kube-proxy-4vghz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m58s
kube-system kube-scheduler-functional-799296 100m (5%) 0 (0%) 0 (0%) 0 (0%) 8m3s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m56s
kubernetes-dashboard dashboard-metrics-scraper-77bf4d6c4c-vt6p2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m
kubernetes-dashboard kubernetes-dashboard-855c9754f9-656tt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1350m (67%) 700m (35%)
memory 682Mi (17%) 870Mi (22%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 7m55s kube-proxy
Normal Starting 5m37s kube-proxy
Normal Starting 6m40s kube-proxy
Normal NodeHasSufficientMemory 8m3s kubelet Node functional-799296 status is now: NodeHasSufficientMemory
Normal NodeAllocatableEnforced 8m3s kubelet Updated Node Allocatable limit across pods
Normal NodeHasNoDiskPressure 8m3s kubelet Node functional-799296 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m3s kubelet Node functional-799296 status is now: NodeHasSufficientPID
Normal Starting 8m3s kubelet Starting kubelet.
Normal RegisteredNode 7m59s node-controller Node functional-799296 event: Registered Node functional-799296 in Controller
Normal NodeReady 7m58s kubelet Node functional-799296 status is now: NodeReady
Normal NodeHasNoDiskPressure 6m46s (x8 over 6m46s) kubelet Node functional-799296 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 6m46s (x8 over 6m46s) kubelet Node functional-799296 status is now: NodeHasSufficientMemory
Normal Starting 6m46s kubelet Starting kubelet.
Normal NodeHasSufficientPID 6m46s (x7 over 6m46s) kubelet Node functional-799296 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 6m46s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 6m39s node-controller Node functional-799296 event: Registered Node functional-799296 in Controller
Normal Starting 5m44s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 5m44s (x8 over 5m44s) kubelet Node functional-799296 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m44s (x8 over 5m44s) kubelet Node functional-799296 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m44s (x7 over 5m44s) kubelet Node functional-799296 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 5m44s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 5m37s node-controller Node functional-799296 event: Registered Node functional-799296 in Controller
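Note the allocated-resources arithmetic: 1350m of CPU requests on a 2-CPU (2000m) node is the 67% shown, and the 700m of limits is 35%, so the node is request-heavy but not limit-overcommitted; the three Starting/RegisteredNode event groups line up with the three kubelet restarts visible in the container table. The whole section can be regenerated against a live profile with:

  kubectl --context functional-799296 describe node functional-799296
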
==> dmesg <==
[ +0.108325] kauditd_printk_skb: 1 callbacks suppressed
[ +0.113567] kauditd_printk_skb: 373 callbacks suppressed
[ +0.099329] kauditd_printk_skb: 205 callbacks suppressed
[ +0.140732] kauditd_printk_skb: 166 callbacks suppressed
[ +0.069461] kauditd_printk_skb: 12 callbacks suppressed
[ +10.835030] kauditd_printk_skb: 273 callbacks suppressed
[Sep 8 11:09] kauditd_printk_skb: 16 callbacks suppressed
[ +15.176955] kauditd_printk_skb: 18 callbacks suppressed
[ +5.502717] kauditd_printk_skb: 28 callbacks suppressed
[ +0.001483] kauditd_printk_skb: 2 callbacks suppressed
[ +1.201727] kauditd_printk_skb: 353 callbacks suppressed
[ +4.818024] kauditd_printk_skb: 169 callbacks suppressed
[Sep 8 11:10] kauditd_printk_skb: 78 callbacks suppressed
[ +0.173869] kauditd_printk_skb: 5 callbacks suppressed
[ +15.172551] kauditd_printk_skb: 12 callbacks suppressed
[ +5.550566] kauditd_printk_skb: 22 callbacks suppressed
[ +0.112394] kauditd_printk_skb: 420 callbacks suppressed
[ +5.476814] kauditd_printk_skb: 102 callbacks suppressed
[Sep 8 11:11] kauditd_printk_skb: 119 callbacks suppressed
[ +0.630517] kauditd_printk_skb: 97 callbacks suppressed
[ +4.573282] kauditd_printk_skb: 105 callbacks suppressed
[ +3.518957] kauditd_printk_skb: 114 callbacks suppressed
[ +4.393821] kauditd_printk_skb: 50 callbacks suppressed
[ +0.000017] kauditd_printk_skb: 80 callbacks suppressed
[ +1.595612] crun[13335]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
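The dmesg excerpt is dominated by kauditd rate-limiting notices (suppressed audit records), which is expected under the container churn seen here; the crun/memfd_create line is a kernel hardening warning, not an error. A fuller view, assuming the VM is still up:

  minikube -p functional-799296 ssh -- sudo dmesg | tail -n 100
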
==> etcd [28f5c5f342b5] <==
{"level":"warn","ts":"2025-09-08T11:09:53.392632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42294","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:09:53.402955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42308","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:09:53.411003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42310","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:09:53.421141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42320","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:09:53.431370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42338","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:09:53.440393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42356","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:09:53.513378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42368","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-09-08T11:10:35.620501Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2025-09-08T11:10:35.620714Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-799296","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.63:2380"],"advertise-client-urls":["https://192.168.39.63:2379"]}
{"level":"error","ts":"2025-09-08T11:10:35.620837Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-09-08T11:10:42.623529Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-09-08T11:10:42.623708Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-09-08T11:10:42.623746Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"365d90f3070fcb7b","current-leader-member-id":"365d90f3070fcb7b"}
{"level":"info","ts":"2025-09-08T11:10:42.623848Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
{"level":"info","ts":"2025-09-08T11:10:42.623860Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
{"level":"warn","ts":"2025-09-08T11:10:42.627642Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2025-09-08T11:10:42.627701Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"error","ts":"2025-09-08T11:10:42.627716Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"warn","ts":"2025-09-08T11:10:42.627774Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.63:2379: use of closed network connection"}
{"level":"warn","ts":"2025-09-08T11:10:42.627784Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.63:2379: use of closed network connection"}
{"level":"error","ts":"2025-09-08T11:10:42.627789Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.63:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-09-08T11:10:42.630783Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.63:2380"}
{"level":"error","ts":"2025-09-08T11:10:42.630863Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.63:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-09-08T11:10:42.630921Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.63:2380"}
{"level":"info","ts":"2025-09-08T11:10:42.630931Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-799296","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.63:2380"],"advertise-client-urls":["https://192.168.39.63:2379"]}
==> etcd [8c478ed91b78] <==
{"level":"warn","ts":"2025-09-08T11:10:55.371641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43874","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:10:55.392525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43896","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:10:55.433382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43904","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:10:55.434607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43932","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:10:55.447806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43936","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:10:55.461112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43948","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:10:55.481382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43962","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:10:55.489932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43972","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:10:55.503570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43998","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:10:55.511719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44008","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:10:55.535779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44028","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:10:55.545504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44052","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:10:55.557051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44080","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:10:55.569520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44092","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:10:55.581750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44104","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:10:55.601578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44108","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:10:55.607815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44136","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:10:55.629943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44148","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:10:55.641538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44156","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:10:55.653876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44168","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:10:55.681534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44192","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:10:55.687758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44222","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:10:55.705513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44232","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:10:55.715377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44244","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-08T11:10:55.816477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44268","server-name":"","error":"EOF"}
==> kernel <==
11:16:36 up 8 min, 0 users, load average: 0.06, 0.42, 0.31
Linux functional-799296 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 4 13:14:36 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [fc8fecb4cc17] <==
I0908 11:10:56.607698 1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
I0908 11:10:57.362245 1 controller.go:667] quota admission added evaluator for: serviceaccounts
I0908 11:10:57.391601 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0908 11:10:58.722374 1 controller.go:667] quota admission added evaluator for: deployments.apps
I0908 11:10:58.814539 1 controller.go:667] quota admission added evaluator for: daemonsets.apps
I0908 11:10:58.860001 1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0908 11:10:58.869048 1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0908 11:10:59.950878 1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0908 11:11:00.254169 1 controller.go:667] quota admission added evaluator for: replicasets.apps
I0908 11:11:00.306747 1 controller.go:667] quota admission added evaluator for: endpoints
I0908 11:11:15.543162 1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.203.138"}
I0908 11:11:20.324262 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.206.43"}
I0908 11:11:28.065971 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.26.100"}
I0908 11:11:32.038141 1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.137.235"}
I0908 11:11:36.337658 1 controller.go:667] quota admission added evaluator for: namespaces
I0908 11:11:36.788168 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.225.130"}
I0908 11:11:36.838104 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.71.160"}
I0908 11:12:09.225521 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 11:12:22.079669 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 11:13:29.636876 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 11:13:31.016508 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 11:14:44.869754 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 11:14:55.565168 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 11:16:11.937898 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0908 11:16:11.940238 1 stats.go:136] "Error getting keys" err="empty key: \"\""
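The apiserver log shows each test workload's Service being admitted in order (invalid-svc, hello-node-connect, hello-node, mysql, then the two dashboard services), so service creation itself succeeded; the later "Error getting keys" lines are info-level (I-prefixed) storage-stats noise, not request failures. The ClusterIP allocations map directly to:

  kubectl --context functional-799296 get svc -A -o wide
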
==> kube-controller-manager [0d893b24e3bf] <==
I0908 11:10:59.964533 1 shared_informer.go:356] "Caches are synced" controller="disruption"
I0908 11:10:59.964287 1 shared_informer.go:356] "Caches are synced" controller="stateful set"
I0908 11:10:59.964643 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I0908 11:10:59.967161 1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
I0908 11:10:59.971748 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
I0908 11:10:59.972999 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
I0908 11:10:59.975477 1 shared_informer.go:356] "Caches are synced" controller="expand"
I0908 11:10:59.975425 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
I0908 11:10:59.976862 1 shared_informer.go:356] "Caches are synced" controller="namespace"
I0908 11:10:59.977093 1 shared_informer.go:356] "Caches are synced" controller="endpoint"
I0908 11:10:59.977210 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
I0908 11:10:59.984290 1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
I0908 11:10:59.990150 1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
I0908 11:10:59.992361 1 shared_informer.go:356] "Caches are synced" controller="TTL"
I0908 11:10:59.994798 1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
I0908 11:11:00.001637 1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
I0908 11:11:00.019346 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
E0908 11:11:36.476977 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E0908 11:11:36.497516 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E0908 11:11:36.535118 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E0908 11:11:36.535662 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E0908 11:11:36.549933 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E0908 11:11:36.550659 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E0908 11:11:36.567343 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E0908 11:11:36.567493 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
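The UnhandledError burst at 11:11:36 is a creation-order race: the dashboard addon's ReplicaSets were reconciled before their ServiceAccount existed, so pod creation was forbidden until the next sync. This is self-healing once the ServiceAccount lands, which can be confirmed with:

  kubectl --context functional-799296 -n kubernetes-dashboard get serviceaccount,replicaset
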
==> kube-controller-manager [48239ff88be4] <==
I0908 11:10:48.950763 1 serving.go:386] Generated self-signed cert in-memory
I0908 11:10:49.522609 1 controllermanager.go:191] "Starting" version="v1.34.0"
I0908 11:10:49.522646 1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0908 11:10:49.533303 1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
I0908 11:10:49.533348 1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0908 11:10:49.533296 1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0908 11:10:49.533699 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
==> kube-proxy [2b2df4438da8] <==
I0908 11:10:58.562390 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I0908 11:10:58.664361 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I0908 11:10:58.664481 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.63"]
E0908 11:10:58.664592 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0908 11:10:58.781256 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I0908 11:10:58.781560 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0908 11:10:58.781685 1 server_linux.go:132] "Using iptables Proxier"
I0908 11:10:58.797904 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0908 11:10:58.799903 1 server.go:527] "Version info" version="v1.34.0"
I0908 11:10:58.800229 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0908 11:10:58.807635 1 config.go:200] "Starting service config controller"
I0908 11:10:58.807888 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I0908 11:10:58.807928 1 config.go:106] "Starting endpoint slice config controller"
I0908 11:10:58.808008 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I0908 11:10:58.808168 1 config.go:403] "Starting serviceCIDR config controller"
I0908 11:10:58.808230 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I0908 11:10:58.812083 1 config.go:309] "Starting node config controller"
I0908 11:10:58.812114 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I0908 11:10:58.908081 1 shared_informer.go:356] "Caches are synced" controller="service config"
I0908 11:10:58.908290 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I0908 11:10:58.908318 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I0908 11:10:58.913283 1 shared_informer.go:356] "Caches are synced" controller="node config"
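The ip6tables failure above only means the guest kernel has no IPv6 nat table, so kube-proxy correctly falls back to single-stack IPv4; everything after it is a normal startup. The resulting IPv4 rules can be spot-checked with:

  minikube -p functional-799296 ssh -- sudo iptables -t nat -S KUBE-SERVICES | head
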
==> kube-proxy [a3aded15ab5c] <==
I0908 11:10:48.659680 1 server_linux.go:53] "Using iptables proxy"
I0908 11:10:48.775257 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
E0908 11:10:48.781034 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-799296&limit=500&resourceVersion=0\": dial tcp 192.168.39.63:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E0908 11:10:49.776207 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-799296&limit=500&resourceVersion=0\": dial tcp 192.168.39.63:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
==> kube-scheduler [3e020aa53520] <==
I0908 11:10:56.486742 1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
I0908 11:10:56.486786 1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0908 11:10:56.490541 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0908 11:10:56.490612 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0908 11:10:56.491671 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I0908 11:10:56.491994 1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
E0908 11:10:56.504869 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E0908 11:10:56.505009 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E0908 11:10:56.508275 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E0908 11:10:56.508756 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E0908 11:10:56.509216 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E0908 11:10:56.509711 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E0908 11:10:56.509806 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E0908 11:10:56.510369 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E0908 11:10:56.510724 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E0908 11:10:56.511724 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E0908 11:10:56.511990 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E0908 11:10:56.512248 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E0908 11:10:56.511239 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E0908 11:10:56.512796 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E0908 11:10:56.513731 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E0908 11:10:56.513992 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E0908 11:10:56.514034 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E0908 11:10:56.514370 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
I0908 11:10:56.590779 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
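The scheduler's "clusterrole ... not found" RBAC errors are the same bootstrap race seen in the controller manager: its informers listed resources before the restarting apiserver had recreated the system: default roles, and the closing "Caches are synced" line confirms recovery. To verify the roles now exist:

  kubectl --context functional-799296 get clusterrole system:kube-scheduler system:volume-scheduler
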
==> kube-scheduler [98555532e7d9] <==
I0908 11:10:49.688822 1 serving.go:386] Generated self-signed cert in-memory
==> kubelet <==
Sep 08 11:15:10 functional-799296 kubelet[9815]: E0908 11:15:10.301561 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bm5sk" podUID="94489291-66ea-41b5-9147-83370907abcc"
Sep 08 11:15:17 functional-799296 kubelet[9815]: E0908 11:15:17.302208 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-656tt" podUID="899616a7-67f1-4a7d-b570-23af938cef3f"
Sep 08 11:15:18 functional-799296 kubelet[9815]: E0908 11:15:18.298590 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e057834f-8639-426b-b18b-b92cc9b17156"
Sep 08 11:15:19 functional-799296 kubelet[9815]: E0908 11:15:19.300211 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vt6p2" podUID="a273d9a7-fb0c-4b8d-983d-5281c9c3e63d"
Sep 08 11:15:24 functional-799296 kubelet[9815]: E0908 11:15:24.302611 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bm5sk" podUID="94489291-66ea-41b5-9147-83370907abcc"
Sep 08 11:15:31 functional-799296 kubelet[9815]: E0908 11:15:31.302405 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vt6p2" podUID="a273d9a7-fb0c-4b8d-983d-5281c9c3e63d"
Sep 08 11:15:32 functional-799296 kubelet[9815]: E0908 11:15:32.301019 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-656tt" podUID="899616a7-67f1-4a7d-b570-23af938cef3f"
Sep 08 11:15:33 functional-799296 kubelet[9815]: E0908 11:15:33.298205 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e057834f-8639-426b-b18b-b92cc9b17156"
Sep 08 11:15:35 functional-799296 kubelet[9815]: E0908 11:15:35.299841 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bm5sk" podUID="94489291-66ea-41b5-9147-83370907abcc"
Sep 08 11:15:45 functional-799296 kubelet[9815]: E0908 11:15:45.301794 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vt6p2" podUID="a273d9a7-fb0c-4b8d-983d-5281c9c3e63d"
Sep 08 11:15:47 functional-799296 kubelet[9815]: E0908 11:15:47.300342 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-656tt" podUID="899616a7-67f1-4a7d-b570-23af938cef3f"
Sep 08 11:15:48 functional-799296 kubelet[9815]: E0908 11:15:48.298346 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e057834f-8639-426b-b18b-b92cc9b17156"
Sep 08 11:15:50 functional-799296 kubelet[9815]: E0908 11:15:50.301768 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bm5sk" podUID="94489291-66ea-41b5-9147-83370907abcc"
Sep 08 11:15:59 functional-799296 kubelet[9815]: E0908 11:15:59.300012 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vt6p2" podUID="a273d9a7-fb0c-4b8d-983d-5281c9c3e63d"
Sep 08 11:16:00 functional-799296 kubelet[9815]: E0908 11:16:00.299936 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-656tt" podUID="899616a7-67f1-4a7d-b570-23af938cef3f"
Sep 08 11:16:01 functional-799296 kubelet[9815]: E0908 11:16:01.298372 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e057834f-8639-426b-b18b-b92cc9b17156"
Sep 08 11:16:02 functional-799296 kubelet[9815]: E0908 11:16:02.300820 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bm5sk" podUID="94489291-66ea-41b5-9147-83370907abcc"
Sep 08 11:16:12 functional-799296 kubelet[9815]: E0908 11:16:12.300521 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-656tt" podUID="899616a7-67f1-4a7d-b570-23af938cef3f"
Sep 08 11:16:13 functional-799296 kubelet[9815]: E0908 11:16:13.298258 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e057834f-8639-426b-b18b-b92cc9b17156"
Sep 08 11:16:13 functional-799296 kubelet[9815]: E0908 11:16:13.303429 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vt6p2" podUID="a273d9a7-fb0c-4b8d-983d-5281c9c3e63d"
Sep 08 11:16:14 functional-799296 kubelet[9815]: E0908 11:16:14.301564 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bm5sk" podUID="94489291-66ea-41b5-9147-83370907abcc"
Sep 08 11:16:24 functional-799296 kubelet[9815]: E0908 11:16:24.305484 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-656tt" podUID="899616a7-67f1-4a7d-b570-23af938cef3f"
Sep 08 11:16:25 functional-799296 kubelet[9815]: E0908 11:16:25.298405 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e057834f-8639-426b-b18b-b92cc9b17156"
Sep 08 11:16:26 functional-799296 kubelet[9815]: E0908 11:16:26.301324 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bm5sk" podUID="94489291-66ea-41b5-9147-83370907abcc"
Sep 08 11:16:26 functional-799296 kubelet[9815]: E0908 11:16:26.301706 9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vt6p2" podUID="a273d9a7-fb0c-4b8d-983d-5281c9c3e63d"
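Every failure in this kubelet section shares a single root cause: Docker Hub's unauthenticated pull rate limit (toomanyrequests), not a cluster fault. A hedged mitigation for a run like this, assuming the images can be pulled from an authenticated or un-throttled host, is to pre-load them into the node so the kubelet never contacts the registry:

    docker pull mysql:5.7
    minikube -p functional-799296 image load mysql:5.7

Authenticating the node's Docker daemon (minikube -p functional-799296 ssh, then docker login) would achieve the same for all images at once.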
==> storage-provisioner [8da95a34aad1] <==
I0908 11:10:08.878220 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0908 11:10:08.888633 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0908 11:10:08.888730 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
W0908 11:10:08.892145 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:10:12.349561 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:10:16.610079 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:10:20.209144 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:10:23.264748 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:10:26.287715 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:10:26.293629 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
I0908 11:10:26.293852 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0908 11:10:26.294105 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-799296_8e479d11-d693-40bb-9c1e-b39710a5c181!
I0908 11:10:26.295179 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3c9d7864-016f-4339-b126-11f104bc2c6b", APIVersion:"v1", ResourceVersion:"556", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-799296_8e479d11-d693-40bb-9c1e-b39710a5c181 became leader
W0908 11:10:26.304031 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:10:26.307788 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
I0908 11:10:26.395277 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-799296_8e479d11-d693-40bb-9c1e-b39710a5c181!
W0908 11:10:28.311332 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:10:28.320750 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:10:30.325001 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:10:30.331391 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:10:32.334794 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:10:32.341394 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:10:34.345571 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:10:34.356004 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
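The repeated deprecation warnings come from the provisioner's leader election, which evidently still stores its lease in a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, acquired above) rather than a coordination.k8s.io/v1 Lease; each renew reads and then updates that object, which is why the warnings arrive in pairs every couple of seconds. The lease object can be inspected directly:

    kubectl --context functional-799296 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml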
==> storage-provisioner [fe4a8982187e] <==
W0908 11:16:11.886326 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:16:13.890618 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:16:13.896350 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:16:15.900132 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:16:15.907665 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:16:17.911591 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:16:17.922583 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:16:19.927266 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:16:19.934042 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:16:21.938409 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:16:21.950215 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:16:23.953921 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:16:23.959957 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:16:25.963836 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:16:25.969755 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:16:27.974500 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:16:27.982373 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:16:29.986710 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:16:29.997544 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:16:32.001819 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:16:32.008527 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:16:34.012045 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:16:34.017976 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:16:36.022675 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0908 11:16:36.032683 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-799296 -n functional-799296
helpers_test.go:269: (dbg) Run: kubectl --context functional-799296 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-bm5sk sp-pod dashboard-metrics-scraper-77bf4d6c4c-vt6p2 kubernetes-dashboard-855c9754f9-656tt
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context functional-799296 describe pod busybox-mount mysql-5bb876957f-bm5sk sp-pod dashboard-metrics-scraper-77bf4d6c4c-vt6p2 kubernetes-dashboard-855c9754f9-656tt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-799296 describe pod busybox-mount mysql-5bb876957f-bm5sk sp-pod dashboard-metrics-scraper-77bf4d6c4c-vt6p2 kubernetes-dashboard-855c9754f9-656tt: exit status 1 (103.489184ms)
-- stdout --
Name: busybox-mount
Namespace: default
Priority: 0
Service Account: default
Node: functional-799296/192.168.39.63
Start Time: Mon, 08 Sep 2025 11:11:21 +0000
Labels: integration-test=busybox-mount
Annotations: <none>
Status: Succeeded
IP: 10.244.0.10
IPs:
IP: 10.244.0.10
Containers:
mount-munger:
Container ID: docker://c0b1dc9ecbc3d940b52991b15e1830ada53642504383ccc7d74239f147ce04e9
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID: docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 08 Sep 2025 11:11:24 +0000
Finished: Mon, 08 Sep 2025 11:11:24 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vkd9c (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: HostPath (bare host directory volume)
Path: /mount-9p
HostPathType:
kube-api-access-vkd9c:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m16s default-scheduler Successfully assigned default/busybox-mount to functional-799296
Normal Pulling 5m16s kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Normal Pulled 5m13s kubelet Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.126s (2.702s including waiting). Image size: 4403845 bytes.
Normal Created 5m13s kubelet Created container: mount-munger
Normal Started 5m13s kubelet Started container mount-munger
Name: mysql-5bb876957f-bm5sk
Namespace: default
Priority: 0
Service Account: default
Node: functional-799296/192.168.39.63
Start Time: Mon, 08 Sep 2025 11:11:32 +0000
Labels: app=mysql
pod-template-hash=5bb876957f
Annotations: <none>
Status: Pending
IP: 10.244.0.13
IPs:
IP: 10.244.0.13
Controlled By: ReplicaSet/mysql-5bb876957f
Containers:
mysql:
Container ID:
Image: docker.io/mysql:5.7
Image ID:
Port: 3306/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Limits:
cpu: 700m
memory: 700Mi
Requests:
cpu: 600m
memory: 512Mi
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4qc5g (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-4qc5g:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m5s default-scheduler Successfully assigned default/mysql-5bb876957f-bm5sk to functional-799296
Warning Failed 3m41s (x4 over 5m4s) kubelet Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal Pulling 2m16s (x5 over 5m5s) kubelet Pulling image "docker.io/mysql:5.7"
Warning Failed 2m15s (x5 over 5m4s) kubelet Error: ErrImagePull
Warning Failed 2m15s kubelet Failed to pull image "docker.io/mysql:5.7": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning Failed 73s (x15 over 5m4s) kubelet Error: ImagePullBackOff
Normal BackOff 11s (x20 over 5m4s) kubelet Back-off pulling image "docker.io/mysql:5.7"
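The event sequence above (Pulling, then Failed/ErrImagePull, then ImagePullBackOff/BackOff with a growing retry interval) is the kubelet's standard image-pull back-off loop; the pod stays Pending because every retry hits the same rate limit. To follow just this pod's pull events, one option is:

    kubectl --context functional-799296 get events -n default --field-selector involvedObject.name=mysql-5bb876957f-bm5sk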
Name: sp-pod
Namespace: default
Priority: 0
Service Account: default
Node: functional-799296/192.168.39.63
Start Time: Mon, 08 Sep 2025 11:11:25 +0000
Labels: test=storage-provisioner
Annotations: <none>
Status: Pending
IP: 10.244.0.11
IPs:
IP: 10.244.0.11
Containers:
myfrontend:
Container ID:
Image: docker.io/nginx
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cr22d (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mypd:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: myclaim
ReadOnly: false
kube-api-access-cr22d:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m12s default-scheduler Successfully assigned default/sp-pod to functional-799296
Normal Pulling 2m14s (x5 over 5m11s) kubelet Pulling image "docker.io/nginx"
Warning Failed 2m13s (x5 over 5m10s) kubelet Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning Failed 2m13s (x5 over 5m10s) kubelet Error: ErrImagePull
Warning Failed 79s (x15 over 5m10s) kubelet Error: ImagePullBackOff
Normal BackOff 12s (x20 over 5m10s) kubelet Back-off pulling image "docker.io/nginx"
-- /stdout --
** stderr **
Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-vt6p2" not found
Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-656tt" not found
** /stderr **
helpers_test.go:287: kubectl --context functional-799296 describe pod busybox-mount mysql-5bb876957f-bm5sk sp-pod dashboard-metrics-scraper-77bf4d6c4c-vt6p2 kubernetes-dashboard-855c9754f9-656tt: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (301.98s)
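The 301.98s runtime matches the test's roughly five-minute wait for a dashboard URL that never appeared, since the dashboard images could not be pulled. A hedged local repro, assuming out/minikube-linux-amd64 is already built and that no extra build tags are required for the integration package (check the repo's Makefile):

    go test ./test/integration -run TestFunctional/parallel/DashboardCmd -v -timeout 30m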