=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-085003 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-085003 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-085003 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-085003 --alsologtostderr -v=1] stderr:
I0929 13:13:20.563487 1170270 out.go:360] Setting OutFile to fd 1 ...
I0929 13:13:20.564762 1170270 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 13:13:20.564779 1170270 out.go:374] Setting ErrFile to fd 2...
I0929 13:13:20.564785 1170270 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 13:13:20.565064 1170270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
I0929 13:13:20.565793 1170270 mustload.go:65] Loading cluster: functional-085003
I0929 13:13:20.566245 1170270 config.go:182] Loaded profile config "functional-085003": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 13:13:20.566706 1170270 cli_runner.go:164] Run: docker container inspect functional-085003 --format={{.State.Status}}
I0929 13:13:20.590590 1170270 host.go:66] Checking if "functional-085003" exists ...
I0929 13:13:20.590919 1170270 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0929 13:13:20.691970 1170270 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-29 13:13:20.680915817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0929 13:13:20.692084 1170270 api_server.go:166] Checking apiserver status ...
I0929 13:13:20.692164 1170270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0929 13:13:20.692207 1170270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-085003
I0929 13:13:20.712368 1170270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33933 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/functional-085003/id_rsa Username:docker}
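The `docker container inspect -f` template in the lines above is how the host-side port mapped to the container's 22/tcp (33933 in this run) gets resolved before the SSH client is built. A minimal standalone sketch of that lookup, assuming a running `functional-085003` container; `sshHostPort` is an illustrative helper, not minikube's actual sshutil:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort asks the Docker daemon which host port is bound to the
// container's 22/tcp, using the same Go template seen in the log above.
func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("functional-085003")
	fmt.Println(port, err) // e.g. "33933 <nil>" for the run above
}
```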
I0929 13:13:20.831799 1170270 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/9055/cgroup
I0929 13:13:20.845448 1170270 api_server.go:182] apiserver freezer: "12:freezer:/docker/808859ee6cd90c0edf7ef87af5e3d7142ab71f43434ae365a1a794f1193cb199/kubepods/burstable/pod8389b3c5071f04a90f8b816ba5cbd99d/ae17de939d81bee1c5af086b9803f2b620513d471f7d0231817d65c9042e89d6"
I0929 13:13:20.845522 1170270 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/808859ee6cd90c0edf7ef87af5e3d7142ab71f43434ae365a1a794f1193cb199/kubepods/burstable/pod8389b3c5071f04a90f8b816ba5cbd99d/ae17de939d81bee1c5af086b9803f2b620513d471f7d0231817d65c9042e89d6/freezer.state
I0929 13:13:20.855580 1170270 api_server.go:204] freezer state: "THAWED"
I0929 13:13:20.855609 1170270 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0929 13:13:20.864320 1170270 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
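The healthz check logged above reduces to an HTTPS GET against the apiserver that expects status 200 with body `ok`. A minimal sketch of that probe, reusing the endpoint from the log; note that minikube verifies the connection with the cluster CA, whereas this sketch skips TLS verification purely for brevity:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiServerHealthy reports whether GET <url> returns 200 and the body "ok".
func apiServerHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch only: the real check trusts the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	healthy, err := apiServerHealthy("https://192.168.49.2:8441/healthz")
	fmt.Println(healthy, err)
}
```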
W0929 13:13:20.864355 1170270 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0929 13:13:20.864597 1170270 config.go:182] Loaded profile config "functional-085003": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 13:13:20.864620 1170270 addons.go:69] Setting dashboard=true in profile "functional-085003"
I0929 13:13:20.864628 1170270 addons.go:238] Setting addon dashboard=true in "functional-085003"
I0929 13:13:20.864663 1170270 host.go:66] Checking if "functional-085003" exists ...
I0929 13:13:20.865080 1170270 cli_runner.go:164] Run: docker container inspect functional-085003 --format={{.State.Status}}
I0929 13:13:20.905505 1170270 out.go:179] - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0929 13:13:20.908609 1170270 out.go:179] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0929 13:13:20.911796 1170270 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0929 13:13:20.911822 1170270 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0929 13:13:20.911939 1170270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-085003
I0929 13:13:20.947040 1170270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33933 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/functional-085003/id_rsa Username:docker}
I0929 13:13:21.060083 1170270 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0929 13:13:21.060117 1170270 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0929 13:13:21.080259 1170270 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0929 13:13:21.080286 1170270 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0929 13:13:21.099856 1170270 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0929 13:13:21.099878 1170270 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0929 13:13:21.120878 1170270 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0929 13:13:21.120902 1170270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0929 13:13:21.142168 1170270 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0929 13:13:21.142190 1170270 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0929 13:13:21.162607 1170270 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0929 13:13:21.162630 1170270 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0929 13:13:21.181380 1170270 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0929 13:13:21.181402 1170270 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0929 13:13:21.201688 1170270 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0929 13:13:21.201748 1170270 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0929 13:13:21.222256 1170270 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0929 13:13:21.222278 1170270 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0929 13:13:21.240849 1170270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0929 13:13:22.129460 1170270 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p functional-085003 addons enable metrics-server
I0929 13:13:22.132627 1170270 addons.go:201] Writing out "functional-085003" config to set dashboard=true...
W0929 13:13:22.132942 1170270 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0929 13:13:22.133647 1170270 kapi.go:59] client config for functional-085003: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt", KeyFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.key", CAFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20f8010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0929 13:13:22.134227 1170270 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0929 13:13:22.134267 1170270 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0929 13:13:22.134289 1170270 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0929 13:13:22.134310 1170270 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0929 13:13:22.134333 1170270 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0929 13:13:22.153493 1170270 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard kubernetes-dashboard 5b79f336-b7e3-42a9-981b-c53f638bdbb5 953 0 2025-09-29 13:13:22 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-29 13:13:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.110.146.72,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.110.146.72],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0929 13:13:22.153674 1170270 out.go:285] * Launching proxy ...
* Launching proxy ...
I0929 13:13:22.153771 1170270 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-085003 proxy --port 36195]
I0929 13:13:22.155729 1170270 dashboard.go:157] Waiting for kubectl to output host:port ...
I0929 13:13:22.221568 1170270 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
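Per `dashboard.go:152` and `dashboard.go:157` above, the proxy step launches `kubectl proxy` and scans its stdout for the `Starting to serve on host:port` banner before moving on. A simplified sketch of that handshake with the context and port from this run; illustrative only, not minikube's code:

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "functional-085003", "proxy", "--port", "36195")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// Block until kubectl announces the listening address on stdout.
	scanner := bufio.NewScanner(stdout)
	for scanner.Scan() {
		if line := scanner.Text(); strings.HasPrefix(line, "Starting to serve on") {
			fmt.Println("proxy ready at", strings.TrimSpace(strings.TrimPrefix(line, "Starting to serve on")))
			break
		}
	}
	// The real test leaves the proxy running; stop it here so the sketch exits.
	_ = cmd.Process.Kill()
	_ = cmd.Wait()
}
```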
W0929 13:13:22.221635 1170270 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0929 13:13:22.238543 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4bfa4ba5-e1e5-4b87-a05c-9e8076d87fde] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x4000718140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004ab2c0 TLS:<nil>}
I0929 13:13:22.238623 1170270 retry.go:31] will retry after 88.427µs: Temporary Error: unexpected response code: 503
I0929 13:13:22.242615 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8bb2dd3c-87b4-44d2-b75b-d2a9d0389552] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x40007181c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004ab400 TLS:<nil>}
I0929 13:13:22.242681 1170270 retry.go:31] will retry after 134.514µs: Temporary Error: unexpected response code: 503
I0929 13:13:22.246453 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[436ed7cd-4735-49e0-bdc6-4d70346c76d2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x40007cd800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000454140 TLS:<nil>}
I0929 13:13:22.246533 1170270 retry.go:31] will retry after 325.002µs: Temporary Error: unexpected response code: 503
I0929 13:13:22.250391 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9b2bde31-3b8a-418d-b519-8f0e388bb30e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x40007182c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004ab540 TLS:<nil>}
I0929 13:13:22.250449 1170270 retry.go:31] will retry after 353.949µs: Temporary Error: unexpected response code: 503
I0929 13:13:22.254480 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ac91bc3d-d7b6-431d-ab3e-e286da754239] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x4000718340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004ab680 TLS:<nil>}
I0929 13:13:22.254538 1170270 retry.go:31] will retry after 754.783µs: Temporary Error: unexpected response code: 503
I0929 13:13:22.267754 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[47af96ca-2d8c-4d19-b362-b6432294b917] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x40007cda80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000454280 TLS:<nil>}
I0929 13:13:22.267821 1170270 retry.go:31] will retry after 623.521µs: Temporary Error: unexpected response code: 503
I0929 13:13:22.271897 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[43cb4fac-18a1-4e1f-b27e-f449ebe50f8e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x40007cdb40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004ab7c0 TLS:<nil>}
I0929 13:13:22.271974 1170270 retry.go:31] will retry after 1.382322ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.277089 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[43d86896-56f5-49c5-a5bc-1139a45fba20] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x40007cdbc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004543c0 TLS:<nil>}
I0929 13:13:22.277151 1170270 retry.go:31] will retry after 1.855482ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.282312 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[36371e1b-beaf-43b9-88ac-1790e7ae7f11] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x40007cdc40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000454500 TLS:<nil>}
I0929 13:13:22.282373 1170270 retry.go:31] will retry after 3.317696ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.289526 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4fa9d49e-3ea2-4e90-9d8d-7b99cbd0f905] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x4000718600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000454640 TLS:<nil>}
I0929 13:13:22.289589 1170270 retry.go:31] will retry after 3.902596ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.296956 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cf339ed6-f2c7-4bcf-8512-8814dbd075bf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x4000718700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004548c0 TLS:<nil>}
I0929 13:13:22.297016 1170270 retry.go:31] will retry after 4.191099ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.305277 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b3a04725-4e7b-4bdc-b6ea-dfba4cb3fa5c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x40007187c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000454a00 TLS:<nil>}
I0929 13:13:22.305339 1170270 retry.go:31] will retry after 5.827675ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.315155 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[da90cb13-6087-469d-90a3-3d0d18ab0ea3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x40007cdd00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000454b40 TLS:<nil>}
I0929 13:13:22.315223 1170270 retry.go:31] will retry after 14.413874ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.333512 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1fc68c17-6ff5-4e6e-bf17-47db06793907] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x40007cdd80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000454c80 TLS:<nil>}
I0929 13:13:22.333576 1170270 retry.go:31] will retry after 26.370673ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.363973 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[92b3622c-257a-4cc9-9d11-263ab4b7d521] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x4000718940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004ab900 TLS:<nil>}
I0929 13:13:22.364043 1170270 retry.go:31] will retry after 28.064363ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.395290 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[23c84596-4645-414c-95d4-0e591d798913] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x40007189c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004aba40 TLS:<nil>}
I0929 13:13:22.395377 1170270 retry.go:31] will retry after 60.423299ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.459702 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9a340133-e6be-442e-bc2a-e07a14c97fb4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x4000718a40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004abb80 TLS:<nil>}
I0929 13:13:22.459760 1170270 retry.go:31] will retry after 68.343407ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.532700 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1d4d85f2-0988-4a5f-86f7-47cf9334c2d6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x4001584080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000454dc0 TLS:<nil>}
I0929 13:13:22.532764 1170270 retry.go:31] will retry after 115.64046ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.652124 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8ef36cc1-4f0d-47a5-a271-0ca4aa3f1c0f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x4001584100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000455040 TLS:<nil>}
I0929 13:13:22.652188 1170270 retry.go:31] will retry after 175.809179ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.831396 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2de806e7-5392-42b2-afd1-b497ce07eff0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x4001584180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000455180 TLS:<nil>}
I0929 13:13:22.831462 1170270 retry.go:31] will retry after 315.899093ms: Temporary Error: unexpected response code: 503
I0929 13:13:23.150959 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9fa70c79-c875-4bfd-b547-70dda0ddbb0d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:23 GMT]] Body:0x4001584200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004552c0 TLS:<nil>}
I0929 13:13:23.151024 1170270 retry.go:31] will retry after 256.825679ms: Temporary Error: unexpected response code: 503
I0929 13:13:23.411331 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9e964bf4-8914-4de0-8efa-9340d260270e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:23 GMT]] Body:0x4000718cc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004abcc0 TLS:<nil>}
I0929 13:13:23.411393 1170270 retry.go:31] will retry after 627.646157ms: Temporary Error: unexpected response code: 503
I0929 13:13:24.042500 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b43d3ec7-3ef4-4e7b-ac41-afb3174302fb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:24 GMT]] Body:0x4001584300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000455400 TLS:<nil>}
I0929 13:13:24.042573 1170270 retry.go:31] will retry after 484.981865ms: Temporary Error: unexpected response code: 503
I0929 13:13:24.530707 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[53788a14-ad24-472e-8051-b7bd1eb4083e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:24 GMT]] Body:0x4001584380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004abe00 TLS:<nil>}
I0929 13:13:24.530781 1170270 retry.go:31] will retry after 850.947667ms: Temporary Error: unexpected response code: 503
I0929 13:13:25.386151 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[80078a32-90ef-404e-93e9-77570fb205ec] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:25 GMT]] Body:0x4000718e40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400032b7c0 TLS:<nil>}
I0929 13:13:25.386218 1170270 retry.go:31] will retry after 1.588942309s: Temporary Error: unexpected response code: 503
I0929 13:13:26.985471 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[64684a67-6df1-4d8f-a4ff-e522a02ccce6] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:26 GMT]] Body:0x4000718f00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000455680 TLS:<nil>}
I0929 13:13:26.985536 1170270 retry.go:31] will retry after 2.823226816s: Temporary Error: unexpected response code: 503
I0929 13:13:29.811870 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b7a09164-9f92-4e10-a110-b6fdf6b51a3a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:29 GMT]] Body:0x4000718f80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400032bb80 TLS:<nil>}
I0929 13:13:29.811931 1170270 retry.go:31] will retry after 2.893886865s: Temporary Error: unexpected response code: 503
I0929 13:13:32.709431 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[523890c4-4bd4-48a8-894c-2c931750e971] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:32 GMT]] Body:0x4000719000 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400032bcc0 TLS:<nil>}
I0929 13:13:32.709488 1170270 retry.go:31] will retry after 8.373400345s: Temporary Error: unexpected response code: 503
I0929 13:13:41.086440 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ed239d1f-fb20-4ca3-a0a0-9fffa2a27cf5] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:41 GMT]] Body:0x40015845c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004557c0 TLS:<nil>}
I0929 13:13:41.086522 1170270 retry.go:31] will retry after 7.261294639s: Temporary Error: unexpected response code: 503
I0929 13:13:48.353515 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7c68d894-00c1-4f11-9f7d-68fb7ea96505] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:48 GMT]] Body:0x4001584680 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000455900 TLS:<nil>}
I0929 13:13:48.353588 1170270 retry.go:31] will retry after 7.692540089s: Temporary Error: unexpected response code: 503
I0929 13:13:56.050316 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6ef24314-7f58-4fc7-81fe-0642cceb0f6a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:56 GMT]] Body:0x4001584740 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000455a40 TLS:<nil>}
I0929 13:13:56.050376 1170270 retry.go:31] will retry after 15.886612511s: Temporary Error: unexpected response code: 503
I0929 13:14:11.940432 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9cd8eb9d-35f7-407a-bfcf-dfbc25b6cd1d] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:14:11 GMT]] Body:0x4001584800 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000220000 TLS:<nil>}
I0929 13:14:11.940496 1170270 retry.go:31] will retry after 18.71422801s: Temporary Error: unexpected response code: 503
I0929 13:14:30.658462 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5a077466-e64b-4f7c-a071-52c03354b87e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:14:30 GMT]] Body:0x40015848c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000455b80 TLS:<nil>}
I0929 13:14:30.658520 1170270 retry.go:31] will retry after 31.558868806s: Temporary Error: unexpected response code: 503
I0929 13:15:02.220947 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a9724194-c546-4325-80de-981b54910d11] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:15:02 GMT]] Body:0x4001584980 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000455cc0 TLS:<nil>}
I0929 13:15:02.221012 1170270 retry.go:31] will retry after 50.020817285s: Temporary Error: unexpected response code: 503
I0929 13:15:52.244706 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[401f4503-2e61-4a1b-871c-51664e4d6281] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:15:52 GMT]] Body:0x4000718040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000220140 TLS:<nil>}
I0929 13:15:52.244802 1170270 retry.go:31] will retry after 1m28.693408734s: Temporary Error: unexpected response code: 503
I0929 13:17:20.942432 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1d44e4a6-f29e-4191-874a-305a65a590f0] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:17:20 GMT]] Body:0x4001584180 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000454000 TLS:<nil>}
I0929 13:17:20.942664 1170270 retry.go:31] will retry after 1m22.712590338s: Temporary Error: unexpected response code: 503
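The retry trace above is a standard grow-the-backoff loop: each 503 roughly doubles the wait, from microseconds up toward the overall deadline, until the proxy either answers 200 or the test gives up. A simplified sketch of that loop against this run's proxy URL; minikube's `retry` package adds jitter and caps that differ in detail:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitFor200 polls url until it answers 200 OK, doubling the sleep after
// every failure (capped), and errors out once the deadline passes.
func waitFor200(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := 100 * time.Microsecond
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(backoff)
		backoff *= 2
		if limit := 90 * time.Second; backoff > limit {
			backoff = limit
		}
	}
	return fmt.Errorf("no 200 from %s within %s", url, timeout)
}

func main() {
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	fmt.Println(waitFor200(url, 5*time.Minute))
}
```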
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect functional-085003
helpers_test.go:243: (dbg) docker inspect functional-085003:
-- stdout --
[
{
"Id": "808859ee6cd90c0edf7ef87af5e3d7142ab71f43434ae365a1a794f1193cb199",
"Created": "2025-09-29T13:09:32.739049483Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 1153948,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-09-29T13:09:32.807023038Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:3d6f74760dfc17060da5abc5d463d3d45b4ceea05955c9cc42b3ec56cb38cc48",
"ResolvConfPath": "/var/lib/docker/containers/808859ee6cd90c0edf7ef87af5e3d7142ab71f43434ae365a1a794f1193cb199/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/808859ee6cd90c0edf7ef87af5e3d7142ab71f43434ae365a1a794f1193cb199/hostname",
"HostsPath": "/var/lib/docker/containers/808859ee6cd90c0edf7ef87af5e3d7142ab71f43434ae365a1a794f1193cb199/hosts",
"LogPath": "/var/lib/docker/containers/808859ee6cd90c0edf7ef87af5e3d7142ab71f43434ae365a1a794f1193cb199/808859ee6cd90c0edf7ef87af5e3d7142ab71f43434ae365a1a794f1193cb199-json.log",
"Name": "/functional-085003",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-085003:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-085003",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "808859ee6cd90c0edf7ef87af5e3d7142ab71f43434ae365a1a794f1193cb199",
"LowerDir": "/var/lib/docker/overlay2/4351ef35506854cbb363c337eff050f44c53940225172eba186da1c8b60a4277-init/diff:/var/lib/docker/overlay2/131eb13c105941e1413431255a86d3f8e028faf09e8615e9e5b8dbe91366a7f8/diff",
"MergedDir": "/var/lib/docker/overlay2/4351ef35506854cbb363c337eff050f44c53940225172eba186da1c8b60a4277/merged",
"UpperDir": "/var/lib/docker/overlay2/4351ef35506854cbb363c337eff050f44c53940225172eba186da1c8b60a4277/diff",
"WorkDir": "/var/lib/docker/overlay2/4351ef35506854cbb363c337eff050f44c53940225172eba186da1c8b60a4277/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-085003",
"Source": "/var/lib/docker/volumes/functional-085003/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-085003",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-085003",
"name.minikube.sigs.k8s.io": "functional-085003",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "6dcbe98b19adcc45964d77ed66b84e986f77ac2325acbbf0dac3fa996b9c5a18",
"SandboxKey": "/var/run/docker/netns/6dcbe98b19ad",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33933"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33934"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33937"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33935"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33936"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-085003": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "9a:2a:36:ce:65:e0",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "3ce27eabbb5598261257b94b8abdd2a97a18edc168a634dd1aca7dad29ec8ffe",
"EndpointID": "06229491afe6d23fa4576a4176d09fc56361e77a853f195c0bd8feb4168ed161",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-085003",
"808859ee6cd9"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-085003 -n functional-085003
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-arm64 -p functional-085003 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-085003 logs -n 25: (1.221381335s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs:
-- stdout --
==> Audit <==
┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ functional-085003 ssh stat /mount-9p/created-by-pod │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
│ ssh │ functional-085003 ssh sudo umount -f /mount-9p │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
│ mount │ -p functional-085003 /tmp/TestFunctionalparallelMountCmdspecific-port2215760306/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ │
│ ssh │ functional-085003 ssh findmnt -T /mount-9p | grep 9p │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ │
│ ssh │ functional-085003 ssh findmnt -T /mount-9p | grep 9p │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
│ ssh │ functional-085003 ssh -- ls -la /mount-9p │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
│ ssh │ functional-085003 ssh sudo umount -f /mount-9p │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ │
│ mount │ -p functional-085003 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2748648500/001:/mount2 --alsologtostderr -v=1 │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ │
│ ssh │ functional-085003 ssh findmnt -T /mount1 │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ │
│ mount │ -p functional-085003 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2748648500/001:/mount1 --alsologtostderr -v=1 │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ │
│ mount │ -p functional-085003 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2748648500/001:/mount3 --alsologtostderr -v=1 │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ │
│ ssh │ functional-085003 ssh findmnt -T /mount1 │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
│ ssh │ functional-085003 ssh findmnt -T /mount2 │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
│ ssh │ functional-085003 ssh findmnt -T /mount3 │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
│ mount │ -p functional-085003 --kill=true │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ │
│ update-context │ functional-085003 update-context --alsologtostderr -v=2 │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
│ update-context │ functional-085003 update-context --alsologtostderr -v=2 │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
│ update-context │ functional-085003 update-context --alsologtostderr -v=2 │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
│ image │ functional-085003 image ls --format short --alsologtostderr │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
│ image │ functional-085003 image ls --format yaml --alsologtostderr │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
│ ssh │ functional-085003 ssh pgrep buildkitd │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ │
│ image │ functional-085003 image build -t localhost/my-image:functional-085003 testdata/build --alsologtostderr │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
│ image │ functional-085003 image ls │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
│ image │ functional-085003 image ls --format json --alsologtostderr │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
│ image │ functional-085003 image ls --format table --alsologtostderr │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/09/29 13:13:20
Running on machine: ip-172-31-30-239
Binary: Built with gc go1.24.6 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0929 13:13:20.315190 1170191 out.go:360] Setting OutFile to fd 1 ...
I0929 13:13:20.315402 1170191 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 13:13:20.315415 1170191 out.go:374] Setting ErrFile to fd 2...
I0929 13:13:20.315421 1170191 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 13:13:20.315795 1170191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
I0929 13:13:20.316202 1170191 out.go:368] Setting JSON to false
I0929 13:13:20.317277 1170191 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17753,"bootTime":1759133848,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0929 13:13:20.317357 1170191 start.go:140] virtualization:
I0929 13:13:20.320787 1170191 out.go:179] * [functional-085003] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I0929 13:13:20.323722 1170191 notify.go:220] Checking for updates...
I0929 13:13:20.324274 1170191 out.go:179] - MINIKUBE_LOCATION=21652
I0929 13:13:20.327568 1170191 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0929 13:13:20.330545 1170191 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig
I0929 13:13:20.338424 1170191 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube
I0929 13:13:20.342498 1170191 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I0929 13:13:20.345440 1170191 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I0929 13:13:20.349413 1170191 config.go:182] Loaded profile config "functional-085003": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 13:13:20.350013 1170191 driver.go:421] Setting default libvirt URI to qemu:///system
I0929 13:13:20.396652 1170191 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
I0929 13:13:20.396771 1170191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0929 13:13:20.473351 1170191 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-29 13:13:20.463525815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0929 13:13:20.473594 1170191 docker.go:318] overlay module found
I0929 13:13:20.476738 1170191 out.go:179] * Using the docker driver based on the existing profile
I0929 13:13:20.479675 1170191 start.go:304] selected driver: docker
I0929 13:13:20.479707 1170191 start.go:924] validating driver "docker" against &{Name:functional-085003 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-085003 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0929 13:13:20.479799 1170191 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0929 13:13:20.483411 1170191 out.go:203]
W0929 13:13:20.486380 1170191 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
I0929 13:13:20.489356 1170191 out.go:203]
==> Docker <==
Sep 29 13:13:23 functional-085003 dockerd[6873]: time="2025-09-29T13:13:23.922247482Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 29 13:13:23 functional-085003 dockerd[6873]: time="2025-09-29T13:13:23.973993568Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Sep 29 13:13:23 functional-085003 dockerd[6873]: time="2025-09-29T13:13:23.995034246Z" level=info msg="ignoring event" container=97992fe39d6a920c959784bd8e31624aa0c83f50ed8f6165bfded1fd43110101 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 29 13:13:24 functional-085003 dockerd[6873]: time="2025-09-29T13:13:24.060226640Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 29 13:13:24 functional-085003 dockerd[6873]: time="2025-09-29T13:13:24.775431945Z" level=info msg="ignoring event" container=18bc4443d3e4d9af7b72f950e4941208e5da24bfe924bea5e01b59419fb792a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 29 13:13:24 functional-085003 dockerd[6873]: time="2025-09-29T13:13:24.843701754Z" level=info msg="ignoring event" container=1a1634225e89e791dbef2ad7cc6f4044a4054db1c70cd05ac8dd607c56d21959 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 29 13:13:25 functional-085003 cri-dockerd[7633]: time="2025-09-29T13:13:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7f6576ee2b46e39941a14892a0ce7421e43a6bb9f5997df089f68f17cfdcfc7d/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Sep 29 13:13:25 functional-085003 cri-dockerd[7633]: time="2025-09-29T13:13:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4f7ebe077ff342b988935a3f0ed1f6e6e3092536387183b611c8f96c9117e0d0/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Sep 29 13:13:25 functional-085003 dockerd[6873]: time="2025-09-29T13:13:25.822019435Z" level=info msg="ignoring event" container=7ea893543aba8811162f8b2b53a9acf176a9f6308165245cb76b0092f52d21ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 29 13:13:38 functional-085003 dockerd[6873]: time="2025-09-29T13:13:38.375503527Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Sep 29 13:13:38 functional-085003 dockerd[6873]: time="2025-09-29T13:13:38.469087784Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 29 13:13:39 functional-085003 dockerd[6873]: time="2025-09-29T13:13:39.372306219Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Sep 29 13:13:39 functional-085003 dockerd[6873]: time="2025-09-29T13:13:39.457309412Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 29 13:14:01 functional-085003 dockerd[6873]: time="2025-09-29T13:14:01.386439014Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Sep 29 13:14:01 functional-085003 dockerd[6873]: time="2025-09-29T13:14:01.475908660Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 29 13:14:04 functional-085003 dockerd[6873]: time="2025-09-29T13:14:04.382462842Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Sep 29 13:14:04 functional-085003 dockerd[6873]: time="2025-09-29T13:14:04.471105680Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 29 13:14:52 functional-085003 dockerd[6873]: time="2025-09-29T13:14:52.383759426Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Sep 29 13:14:52 functional-085003 dockerd[6873]: time="2025-09-29T13:14:52.491469286Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 29 13:14:54 functional-085003 dockerd[6873]: time="2025-09-29T13:14:54.377680811Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Sep 29 13:14:54 functional-085003 dockerd[6873]: time="2025-09-29T13:14:54.469318489Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 29 13:16:20 functional-085003 dockerd[6873]: time="2025-09-29T13:16:20.384288557Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Sep 29 13:16:20 functional-085003 dockerd[6873]: time="2025-09-29T13:16:20.490186962Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 29 13:16:22 functional-085003 dockerd[6873]: time="2025-09-29T13:16:22.378751324Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Sep 29 13:16:22 functional-085003 dockerd[6873]: time="2025-09-29T13:16:22.466725292Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
97992fe39d6a9 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 4 minutes ago Exited mount-munger 0 7ea893543aba8 busybox-mount
8f81d89a03581 nginx@sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e 5 minutes ago Running myfrontend 0 efb5a801b6e32 sp-pod
c802a2b4c1896 kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 5 minutes ago Running echo-server 0 09fe235bf6297 hello-node-75c85bcc94-x9877
d4f47bf1a64ff kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 5 minutes ago Running echo-server 0 ce916d84550eb hello-node-connect-7d85dfc575-tk885
a9f0809fdcb35 nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8 5 minutes ago Running nginx 0 a87a129b9b23c nginx-svc
813a12a4bb87c 6fc32d66c1411 5 minutes ago Running kube-proxy 3 e504852c80e84 kube-proxy-dcjhv
74a272a8012a4 138784d87c9c5 5 minutes ago Running coredns 2 a00f1a467827d coredns-66bc5c9577-gcpkj
edbc65c8dbf7e ba04bb24b9575 6 minutes ago Running storage-provisioner 3 e24fc34c77368 storage-provisioner
78e62c3b505c1 a1894772a478e 6 minutes ago Running etcd 2 c3bcf2050ccf3 etcd-functional-085003
fd58c889dfb04 a25f5ef9c34c3 6 minutes ago Running kube-scheduler 3 096fdcc1c3544 kube-scheduler-functional-085003
ae17de939d81b d291939e99406 6 minutes ago Running kube-apiserver 0 570fdddd688b1 kube-apiserver-functional-085003
9725cf38cb6d4 996be7e86d9b3 6 minutes ago Running kube-controller-manager 3 e478c3657f6c1 kube-controller-manager-functional-085003
e4cc66c08b947 996be7e86d9b3 6 minutes ago Created kube-controller-manager 2 dcb03e1de22c3 kube-controller-manager-functional-085003
3de6c7074f1ff a25f5ef9c34c3 6 minutes ago Created kube-scheduler 2 6ddcdc36d3c35 kube-scheduler-functional-085003
860e9b282b4e5 6fc32d66c1411 6 minutes ago Created kube-proxy 2 e0674ef3ec646 kube-proxy-dcjhv
1efca39d65ab9 ba04bb24b9575 6 minutes ago Exited storage-provisioner 2 07c55813a683a storage-provisioner
63e129dd664e8 138784d87c9c5 7 minutes ago Exited coredns 1 29f21fe92d2a1 coredns-66bc5c9577-gcpkj
d777207fbabf0 a1894772a478e 7 minutes ago Exited etcd 1 2cf4e0cbe8eec etcd-functional-085003
==> coredns [63e129dd664e] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
.:53
[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
CoreDNS-1.12.1
linux/arm64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:46073 - 23168 "HINFO IN 6857215695404878237.5113018390212035553. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013457548s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> coredns [74a272a8012a] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
CoreDNS-1.12.1
linux/arm64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:37438 - 18741 "HINFO IN 8008677431241937798.8580229539547384357. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024601397s
==> describe nodes <==
Name: functional-085003
Roles: control-plane
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=functional-085003
kubernetes.io/os=linux
minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
minikube.k8s.io/name=functional-085003
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_09_29T13_09_56_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 29 Sep 2025 13:09:52 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-085003
AcquireTime: <unset>
RenewTime: Mon, 29 Sep 2025 13:18:18 +0000
Conditions:
Type            Status  LastHeartbeatTime                LastTransitionTime               Reason                      Message
----            ------  -----------------                ------------------               ------                      -------
MemoryPressure  False   Mon, 29 Sep 2025 13:13:51 +0000  Mon, 29 Sep 2025 13:09:49 +0000  KubeletHasSufficientMemory  kubelet has sufficient memory available
DiskPressure    False   Mon, 29 Sep 2025 13:13:51 +0000  Mon, 29 Sep 2025 13:09:49 +0000  KubeletHasNoDiskPressure    kubelet has no disk pressure
PIDPressure     False   Mon, 29 Sep 2025 13:13:51 +0000  Mon, 29 Sep 2025 13:09:49 +0000  KubeletHasSufficientPID     kubelet has sufficient PID available
Ready           True    Mon, 29 Sep 2025 13:13:51 +0000  Mon, 29 Sep 2025 13:09:52 +0000  KubeletReady                kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-085003
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
System Info:
Machine ID: 04fcb50a9a8a45e5bca583ff33deba90
System UUID: 7a64509d-22b6-4698-b144-02838e29693b
Boot ID: b9a0c89a-b2b5-4b29-bf62-29a4a55f08f1
Kernel Version: 5.15.0-1084-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://28.4.0
Kubelet Version: v1.34.0
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (13 in total)
Namespace             Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
---------             ----                                        ------------  ----------  ---------------  -------------  ---
default               hello-node-75c85bcc94-x9877                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
default               hello-node-connect-7d85dfc575-tk885         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
default               nginx-svc                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
default               sp-pod                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
kube-system           coredns-66bc5c9577-gcpkj                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m21s
kube-system           etcd-functional-085003                      100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m26s
kube-system           kube-apiserver-functional-085003            250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m
kube-system           kube-controller-manager-functional-085003   200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m26s
kube-system           kube-proxy-dcjhv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
kube-system           kube-scheduler-functional-085003            100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m26s
kube-system           storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
kubernetes-dashboard  dashboard-metrics-scraper-77bf4d6c4c-7n6xx  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
kubernetes-dashboard  kubernetes-dashboard-855c9754f9-4dm9l       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests    Limits
--------           --------    ------
cpu                750m (37%)  0 (0%)
memory             170Mi (2%)  170Mi (2%)
ephemeral-storage  0 (0%)      0 (0%)
hugepages-1Gi      0 (0%)      0 (0%)
hugepages-2Mi      0 (0%)      0 (0%)
hugepages-32Mi     0 (0%)      0 (0%)
hugepages-64Ki     0 (0%)      0 (0%)
Events:
Type     Reason                   Age                    From             Message
----     ------                   ----                   ----             -------
Normal   Starting                 8m19s                  kube-proxy
Normal   Starting                 5m58s                  kube-proxy
Normal   Starting                 7m3s                   kube-proxy
Warning  CgroupV1                 8m33s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
Normal   NodeHasSufficientMemory  8m33s (x8 over 8m33s)  kubelet          Node functional-085003 status is now: NodeHasSufficientMemory
Normal   NodeHasNoDiskPressure    8m33s (x8 over 8m33s)  kubelet          Node functional-085003 status is now: NodeHasNoDiskPressure
Normal   NodeHasSufficientPID     8m33s (x7 over 8m33s)  kubelet          Node functional-085003 status is now: NodeHasSufficientPID
Normal   NodeAllocatableEnforced  8m33s                  kubelet          Updated Node Allocatable limit across pods
Normal   NodeHasNoDiskPressure    8m26s                  kubelet          Node functional-085003 status is now: NodeHasNoDiskPressure
Warning  CgroupV1                 8m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
Normal   NodeAllocatableEnforced  8m26s                  kubelet          Updated Node Allocatable limit across pods
Normal   NodeHasSufficientMemory  8m26s                  kubelet          Node functional-085003 status is now: NodeHasSufficientMemory
Normal   NodeHasSufficientPID     8m26s                  kubelet          Node functional-085003 status is now: NodeHasSufficientPID
Normal   Starting                 8m26s                  kubelet          Starting kubelet.
Normal   RegisteredNode           8m22s                  node-controller  Node functional-085003 event: Registered Node functional-085003 in Controller
Normal   NodeNotReady             7m15s                  kubelet          Node functional-085003 status is now: NodeNotReady
Normal   RegisteredNode           7m2s                   node-controller  Node functional-085003 event: Registered Node functional-085003 in Controller
Warning  ContainerGCFailed        6m26s (x2 over 7m26s)  kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Normal   NodeHasNoDiskPressure    6m7s (x8 over 6m7s)    kubelet          Node functional-085003 status is now: NodeHasNoDiskPressure
Warning  CgroupV1                 6m7s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
Normal   NodeHasSufficientMemory  6m7s (x8 over 6m7s)    kubelet          Node functional-085003 status is now: NodeHasSufficientMemory
Normal   Starting                 6m7s                   kubelet          Starting kubelet.
Normal   NodeHasSufficientPID     6m7s (x7 over 6m7s)    kubelet          Node functional-085003 status is now: NodeHasSufficientPID
Normal   NodeAllocatableEnforced  6m7s                   kubelet          Updated Node Allocatable limit across pods
Normal   RegisteredNode           5m58s                  node-controller  Node functional-085003 event: Registered Node functional-085003 in Controller
==> dmesg <==
[Sep29 11:47] kauditd_printk_skb: 8 callbacks suppressed
[Sep29 12:09] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[Sep29 13:01] kauditd_printk_skb: 8 callbacks suppressed
==> etcd [78e62c3b505c] <==
{"level":"warn","ts":"2025-09-29T13:12:19.273158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39816","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:12:19.287652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39846","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:12:19.303681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39862","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:12:19.318910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39880","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:12:19.334142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39892","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:12:19.348263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39900","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:12:19.366386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39912","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:12:19.381352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39922","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:12:19.396482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39946","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:12:19.419804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39966","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:12:19.433952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39984","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:12:19.450381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40002","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:12:19.466784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40030","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:12:19.481567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40052","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:12:19.500959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40078","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:12:19.513423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40090","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:12:19.529062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40100","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:12:19.544556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40122","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:12:19.559607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40128","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:12:19.575736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40144","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:12:19.596035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40162","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:12:19.623761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40176","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:12:19.638658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40194","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:12:19.653183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40220","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:12:19.722685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40226","server-name":"","error":"EOF"}
==> etcd [d777207fbabf] <==
{"level":"warn","ts":"2025-09-29T13:11:16.172828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56816","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:11:16.194673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56840","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:11:16.213943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56862","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:11:16.254695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56890","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:11:16.265879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56906","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:11:16.290712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56928","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T13:11:16.469568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56950","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-09-29T13:11:58.380593Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2025-09-29T13:11:58.380654Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-085003","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
{"level":"error","ts":"2025-09-29T13:11:58.380770Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-09-29T13:12:05.383074Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"warn","ts":"2025-09-29T13:12:05.383418Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2025-09-29T13:12:05.383503Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"error","ts":"2025-09-29T13:12:05.383554Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"warn","ts":"2025-09-29T13:12:05.383914Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
{"level":"warn","ts":"2025-09-29T13:12:05.383987Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
{"level":"error","ts":"2025-09-29T13:12:05.384039Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"error","ts":"2025-09-29T13:12:05.383174Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-09-29T13:12:05.387995Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"info","ts":"2025-09-29T13:12:05.390967Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
{"level":"info","ts":"2025-09-29T13:12:05.390985Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
{"level":"info","ts":"2025-09-29T13:12:05.394950Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"error","ts":"2025-09-29T13:12:05.395040Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-09-29T13:12:05.395077Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2025-09-29T13:12:05.395090Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-085003","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
==> kernel <==
13:18:21 up 5:00, 0 users, load average: 0.06, 0.81, 1.86
Linux functional-085003 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kube-apiserver [ae17de939d81] <==
I0929 13:12:21.440895 1 controller.go:667] quota admission added evaluator for: serviceaccounts
I0929 13:12:22.221470 1 controller.go:667] quota admission added evaluator for: deployments.apps
I0929 13:12:22.267292 1 controller.go:667] quota admission added evaluator for: daemonsets.apps
I0929 13:12:22.308851 1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0929 13:12:22.317592 1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0929 13:12:23.880412 1 controller.go:667] quota admission added evaluator for: endpoints
I0929 13:12:24.128422 1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0929 13:12:24.279822 1 controller.go:667] quota admission added evaluator for: replicasets.apps
I0929 13:12:37.430512 1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.3.165"}
I0929 13:12:50.665285 1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.246.229"}
I0929 13:13:00.520989 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.60.160"}
I0929 13:13:10.217290 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.223.205"}
E0929 13:13:11.213362 1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
E0929 13:13:18.597022 1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:51700: use of closed network connection
I0929 13:13:21.788055 1 controller.go:667] quota admission added evaluator for: namespaces
I0929 13:13:22.096442 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.146.72"}
I0929 13:13:22.119960 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.215.166"}
I0929 13:13:28.498338 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0929 13:13:29.829449 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0929 13:14:40.133903 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0929 13:14:58.827623 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0929 13:16:09.059467 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0929 13:16:13.894607 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0929 13:17:15.760270 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0929 13:17:20.174821 1 stats.go:136] "Error getting keys" err="empty key: \"\""
==> kube-controller-manager [9725cf38cb6d] <==
I0929 13:12:23.897356 1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
I0929 13:12:23.897364 1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
I0929 13:12:23.897371 1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
I0929 13:12:23.900258 1 shared_informer.go:356] "Caches are synced" controller="GC"
I0929 13:12:23.901500 1 shared_informer.go:356] "Caches are synced" controller="taint"
I0929 13:12:23.901641 1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
I0929 13:12:23.901764 1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-085003"
I0929 13:12:23.901837 1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
I0929 13:12:23.907200 1 shared_informer.go:356] "Caches are synced" controller="attach detach"
I0929 13:12:23.913942 1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
I0929 13:12:23.917200 1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
I0929 13:12:23.921838 1 shared_informer.go:356] "Caches are synced" controller="job"
I0929 13:12:23.923082 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
I0929 13:12:23.923090 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
I0929 13:12:23.923105 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
I0929 13:12:23.923114 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
I0929 13:12:23.926614 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
I0929 13:12:23.931917 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I0929 13:12:23.931945 1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
I0929 13:12:23.931954 1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
I0929 13:12:23.936566 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E0929 13:13:21.903075 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E0929 13:13:21.906119 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E0929 13:13:21.921822 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E0929 13:13:21.925611 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
==> kube-controller-manager [e4cc66c08b94] <==
==> kube-proxy [813a12a4bb87] <==
I0929 13:12:22.529871 1 server_linux.go:53] "Using iptables proxy"
I0929 13:12:22.829441 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I0929 13:12:22.938765 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I0929 13:12:22.938809 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
E0929 13:12:22.938875 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0929 13:12:23.000374 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0929 13:12:23.000902 1 server_linux.go:132] "Using iptables Proxier"
I0929 13:12:23.010480 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0929 13:12:23.010947 1 server.go:527] "Version info" version="v1.34.0"
I0929 13:12:23.011886 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0929 13:12:23.013735 1 config.go:200] "Starting service config controller"
I0929 13:12:23.015555 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I0929 13:12:23.013918 1 config.go:106] "Starting endpoint slice config controller"
I0929 13:12:23.015772 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I0929 13:12:23.013948 1 config.go:403] "Starting serviceCIDR config controller"
I0929 13:12:23.015789 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I0929 13:12:23.021359 1 config.go:309] "Starting node config controller"
I0929 13:12:23.021383 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I0929 13:12:23.021391 1 shared_informer.go:356] "Caches are synced" controller="node config"
I0929 13:12:23.116650 1 shared_informer.go:356] "Caches are synced" controller="service config"
I0929 13:12:23.116744 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I0929 13:12:23.116789 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
==> kube-proxy [860e9b282b4e] <==
==> kube-scheduler [3de6c7074f1f] <==
==> kube-scheduler [fd58c889dfb0] <==
I0929 13:12:20.459207 1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0929 13:12:20.465442 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0929 13:12:20.465659 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0929 13:12:20.466739 1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
I0929 13:12:20.466812 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
E0929 13:12:20.489038 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E0929 13:12:20.489354 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E0929 13:12:20.489420 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E0929 13:12:20.489549 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E0929 13:12:20.489668 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E0929 13:12:20.489755 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E0929 13:12:20.489919 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E0929 13:12:20.490034 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E0929 13:12:20.490207 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E0929 13:12:20.490217 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E0929 13:12:20.490391 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E0929 13:12:20.490459 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E0929 13:12:20.490515 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E0929 13:12:20.496883 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E0929 13:12:20.497097 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E0929 13:12:20.497279 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E0929 13:12:20.497491 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E0929 13:12:20.497665 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E0929 13:12:20.497839 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
I0929 13:12:22.066783 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Sep 29 13:16:20 functional-085003 kubelet[8642]: E0929 13:16:20.493643 8642 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Sep 29 13:16:20 functional-085003 kubelet[8642]: E0929 13:16:20.493727 8642 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-4dm9l_kubernetes-dashboard(36569db3-c3cc-4e98-bc60-50502bd2cb31): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Sep 29 13:16:20 functional-085003 kubelet[8642]: E0929 13:16:20.493762 8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4dm9l" podUID="36569db3-c3cc-4e98-bc60-50502bd2cb31"
Sep 29 13:16:22 functional-085003 kubelet[8642]: E0929 13:16:22.469950 8642 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Sep 29 13:16:22 functional-085003 kubelet[8642]: E0929 13:16:22.470008 8642 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Sep 29 13:16:22 functional-085003 kubelet[8642]: E0929 13:16:22.470088 8642 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-7n6xx_kubernetes-dashboard(b22190f4-f2ef-47d5-9c65-4b4e3c1b9906): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Sep 29 13:16:22 functional-085003 kubelet[8642]: E0929 13:16:22.470123 8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7n6xx" podUID="b22190f4-f2ef-47d5-9c65-4b4e3c1b9906"
Sep 29 13:16:33 functional-085003 kubelet[8642]: E0929 13:16:33.334460 8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4dm9l" podUID="36569db3-c3cc-4e98-bc60-50502bd2cb31"
Sep 29 13:16:38 functional-085003 kubelet[8642]: E0929 13:16:38.335151 8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7n6xx" podUID="b22190f4-f2ef-47d5-9c65-4b4e3c1b9906"
Sep 29 13:16:44 functional-085003 kubelet[8642]: E0929 13:16:44.336570 8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4dm9l" podUID="36569db3-c3cc-4e98-bc60-50502bd2cb31"
Sep 29 13:16:50 functional-085003 kubelet[8642]: E0929 13:16:50.336649 8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7n6xx" podUID="b22190f4-f2ef-47d5-9c65-4b4e3c1b9906"
Sep 29 13:16:55 functional-085003 kubelet[8642]: E0929 13:16:55.332420 8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4dm9l" podUID="36569db3-c3cc-4e98-bc60-50502bd2cb31"
Sep 29 13:17:03 functional-085003 kubelet[8642]: E0929 13:17:03.332947 8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7n6xx" podUID="b22190f4-f2ef-47d5-9c65-4b4e3c1b9906"
Sep 29 13:17:07 functional-085003 kubelet[8642]: E0929 13:17:07.333209 8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4dm9l" podUID="36569db3-c3cc-4e98-bc60-50502bd2cb31"
Sep 29 13:17:17 functional-085003 kubelet[8642]: E0929 13:17:17.332972 8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7n6xx" podUID="b22190f4-f2ef-47d5-9c65-4b4e3c1b9906"
Sep 29 13:17:20 functional-085003 kubelet[8642]: E0929 13:17:20.340303 8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4dm9l" podUID="36569db3-c3cc-4e98-bc60-50502bd2cb31"
Sep 29 13:17:29 functional-085003 kubelet[8642]: E0929 13:17:29.332499 8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7n6xx" podUID="b22190f4-f2ef-47d5-9c65-4b4e3c1b9906"
Sep 29 13:17:35 functional-085003 kubelet[8642]: E0929 13:17:35.333394 8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4dm9l" podUID="36569db3-c3cc-4e98-bc60-50502bd2cb31"
Sep 29 13:17:41 functional-085003 kubelet[8642]: E0929 13:17:41.333477 8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7n6xx" podUID="b22190f4-f2ef-47d5-9c65-4b4e3c1b9906"
Sep 29 13:17:47 functional-085003 kubelet[8642]: E0929 13:17:47.333049 8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4dm9l" podUID="36569db3-c3cc-4e98-bc60-50502bd2cb31"
Sep 29 13:17:55 functional-085003 kubelet[8642]: E0929 13:17:55.333437 8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7n6xx" podUID="b22190f4-f2ef-47d5-9c65-4b4e3c1b9906"
Sep 29 13:18:00 functional-085003 kubelet[8642]: E0929 13:18:00.334817 8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4dm9l" podUID="36569db3-c3cc-4e98-bc60-50502bd2cb31"
Sep 29 13:18:07 functional-085003 kubelet[8642]: E0929 13:18:07.332773 8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7n6xx" podUID="b22190f4-f2ef-47d5-9c65-4b4e3c1b9906"
Sep 29 13:18:12 functional-085003 kubelet[8642]: E0929 13:18:12.334171 8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4dm9l" podUID="36569db3-c3cc-4e98-bc60-50502bd2cb31"
Sep 29 13:18:19 functional-085003 kubelet[8642]: E0929 13:18:19.332942 8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7n6xx" podUID="b22190f4-f2ef-47d5-9c65-4b4e3c1b9906"
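The ImagePullBackOff entries above are the proximate cause of the dashboard failure: both kubernetes-dashboard and dashboard-metrics-scraper are pinned to docker.io digests, every pull is rejected with Docker Hub's unauthenticated toomanyrequests limit, so neither pod ever starts and the test times out. A minimal mitigation sketch for a rerun, assuming the standard docker and minikube CLIs (the mirror URL is illustrative, not part of this job's configuration):

$ docker login                                    # authenticated pulls get a higher rate limit
$ minikube -p functional-085003 image load docker.io/kubernetesui/dashboard:v2.7.0
$ minikube -p functional-085003 image load docker.io/kubernetesui/metrics-scraper:v1.0.8
$ minikube start -p functional-085003 --registry-mirror=https://mirror.gcr.io   # or route pulls through a mirror

Note that the dashboard manifests pin these images by sha256 digest, so any pre-loaded tag must resolve to the same digests logged above.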
==> storage-provisioner [1efca39d65ab] <==
I0929 13:11:30.660122 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0929 13:11:30.660314 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
W0929 13:11:30.662679 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:11:34.117321 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:11:38.378349 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:11:41.976639 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:11:45.031493 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:11:48.054619 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:11:48.059934 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
I0929 13:11:48.060161 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0929 13:11:48.060363 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-085003_fa44cbea-4d4b-4476-b6da-1bfa78995fba!
I0929 13:11:48.061220 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e43ed296-0deb-4fde-872b-2c4d0fef1b50", APIVersion:"v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-085003_fa44cbea-4d4b-4476-b6da-1bfa78995fba became leader
W0929 13:11:48.064059 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:11:48.070256 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
I0929 13:11:48.161084 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-085003_fa44cbea-4d4b-4476-b6da-1bfa78995fba!
W0929 13:11:50.073919 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:11:50.079098 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:11:52.082333 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:11:52.089776 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:11:54.092852 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:11:54.098082 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:11:56.102374 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:11:56.109572 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:11:58.114591 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:11:58.119813 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
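The repeated warnings in this section are emitted by the API server because the provisioner's leader election (the leaderelection.go lines above) takes its lock on the v1 Endpoints object kube-system/k8s.io-minikube-hostpath, and v1 Endpoints is deprecated in favor of discovery.k8s.io/v1 EndpointSlice as of v1.33. They are noise rather than an error. A quick way to confirm the lock object and check whether a coordination.k8s.io Lease exists instead, assuming the functional-085003 kubectl context:

$ kubectl --context functional-085003 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
$ kubectl --context functional-085003 -n kube-system get leases

Silencing the warning would require the provisioner to switch to a Lease-based resource lock; the cluster itself needs no change.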
==> storage-provisioner [edbc65c8dbf7] <==
W0929 13:17:57.097670 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:17:59.100570 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:17:59.107312 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:18:01.110860 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:18:01.115826 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:18:03.118413 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:18:03.123163 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:18:05.126398 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:18:05.133880 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:18:07.137664 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:18:07.142588 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:18:09.145387 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:18:09.149956 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:18:11.153373 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:18:11.158035 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:18:13.161823 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:18:13.169652 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:18:15.172974 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:18:15.177245 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:18:17.180095 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:18:17.184939 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:18:19.187772 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:18:19.192148 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:18:21.195546 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 13:18:21.200189 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
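Two storage-provisioner container IDs appear in the dump (1efca39d65ab and edbc65c8dbf7), so the component was started twice between 13:11 and 13:18, whether from a container restart or from the cluster restarts earlier in the run; the log content itself is only the deprecation warning on repeat. One way to check the restart count on a rerun, assuming minikube's default pod name storage-provisioner:

$ kubectl --context functional-085003 -n kube-system get pod storage-provisioner -o jsonpath='{.status.containerStatuses[0].restartCount}'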
helpers_test.go:262: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-085003 -n functional-085003
helpers_test.go:269: (dbg) Run: kubectl --context functional-085003 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount dashboard-metrics-scraper-77bf4d6c4c-7n6xx kubernetes-dashboard-855c9754f9-4dm9l
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context functional-085003 describe pod busybox-mount dashboard-metrics-scraper-77bf4d6c4c-7n6xx kubernetes-dashboard-855c9754f9-4dm9l
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-085003 describe pod busybox-mount dashboard-metrics-scraper-77bf4d6c4c-7n6xx kubernetes-dashboard-855c9754f9-4dm9l: exit status 1 (97.426987ms)
-- stdout --
Name:             busybox-mount
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-085003/192.168.49.2
Start Time:       Mon, 29 Sep 2025 13:13:21 +0000
Labels:           integration-test=busybox-mount
Annotations:      <none>
Status:           Succeeded
IP:               10.244.0.12
IPs:
  IP:  10.244.0.12
Containers:
  mount-munger:
    Container ID:  docker://97992fe39d6a920c959784bd8e31624aa0c83f50ed8f6165bfded1fd43110101
    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      --
    Args:
      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 29 Sep 2025 13:13:23 +0000
      Finished:     Mon, 29 Sep 2025 13:13:23 +0000
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /mount-9p from test-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7ws7t (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  test-volume:
    Type:          HostPath (bare host directory volume)
    Path:          /mount-9p
    HostPathType:
  kube-api-access-7ws7t:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  5m1s   default-scheduler  Successfully assigned default/busybox-mount to functional-085003
  Normal  Pulling    5m1s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Normal  Pulled     4m59s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.154s (2.155s including waiting). Image size: 3547125 bytes.
  Normal  Created    4m59s  kubelet            Created container: mount-munger
  Normal  Started    4m59s  kubelet            Started container mount-munger
-- /stdout --
** stderr **
Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-7n6xx" not found
Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-4dm9l" not found
** /stderr **
helpers_test.go:287: kubectl --context functional-085003 describe pod busybox-mount dashboard-metrics-scraper-77bf4d6c4c-7n6xx kubernetes-dashboard-855c9754f9-4dm9l: exit status 1
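Two details are worth noting in this post-mortem. First, the dashboard pods had already been deleted by the time describe ran, hence the NotFound errors and exit status 1. Second, busybox-mount is listed as "non-running" only because the helper's field selector status.phase!=Running also matches pods in phase Succeeded; the pod completed with exit code 0. A stricter selector that skips completed pods, assuming the same context:

$ kubectl --context functional-085003 get po -A --field-selector=status.phase!=Running,status.phase!=Succeeded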
--- FAIL: TestFunctional/parallel/DashboardCmd (302.29s)