=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-643455 --alsologtostderr -v=1]
E1115 09:29:54.694361 128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:30:35.655882 128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:31:57.577349 128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-643455 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-643455 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-643455 --alsologtostderr -v=1] stderr:
I1115 09:29:45.044041 175074 out.go:360] Setting OutFile to fd 1 ...
I1115 09:29:45.044326 175074 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:29:45.044339 175074 out.go:374] Setting ErrFile to fd 2...
I1115 09:29:45.044345 175074 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:29:45.044541 175074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-124770/.minikube/bin
I1115 09:29:45.044879 175074 mustload.go:66] Loading cluster: functional-643455
I1115 09:29:45.045307 175074 config.go:182] Loaded profile config "functional-643455": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1115 09:29:45.045705 175074 cli_runner.go:164] Run: docker container inspect functional-643455 --format={{.State.Status}}
I1115 09:29:45.063631 175074 host.go:66] Checking if "functional-643455" exists ...
I1115 09:29:45.063905 175074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1115 09:29:45.125189 175074 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-15 09:29:45.114902189 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1115 09:29:45.125360 175074 api_server.go:166] Checking apiserver status ...
I1115 09:29:45.125419 175074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1115 09:29:45.125467 175074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-643455
I1115 09:29:45.143415 175074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21894-124770/.minikube/machines/functional-643455/id_rsa Username:docker}
I1115 09:29:45.242030 175074 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5089/cgroup
W1115 09:29:45.250492 175074 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5089/cgroup: Process exited with status 1
stdout:
stderr:
I1115 09:29:45.250537 175074 ssh_runner.go:195] Run: ls
I1115 09:29:45.254408 175074 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1115 09:29:45.258510 175074 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
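The two checks just above probe the apiserver: first via the freezer cgroup (which fails on this host, hence the warning), then directly against https://192.168.49.2:8441/healthz, which answers 200 "ok". A minimal standalone Go sketch of such a healthz probe, using the client certificate, key, and CA paths that the kapi.go client config prints further down; the paths and address are specific to this CI run and the code is an illustration, not minikube's implementation:

// healthz_probe.go: hedged sketch of a /healthz check against the mapped apiserver.
// Paths and address are taken from this log and are environment-specific examples.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	profile := "/home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455"
	// Client cert/key as printed in the kapi.go client config below.
	cert, err := tls.LoadX509KeyPair(profile+"/client.crt", profile+"/client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21894-124770/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	}}
	resp, err := client.Get("https://192.168.49.2:8441/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body)) // a healthy apiserver returns "200 OK" with body "ok"
}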
W1115 09:29:45.258553 175074 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1115 09:29:45.258694 175074 config.go:182] Loaded profile config "functional-643455": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1115 09:29:45.258704 175074 addons.go:70] Setting dashboard=true in profile "functional-643455"
I1115 09:29:45.258710 175074 addons.go:239] Setting addon dashboard=true in "functional-643455"
I1115 09:29:45.258733 175074 host.go:66] Checking if "functional-643455" exists ...
I1115 09:29:45.259032 175074 cli_runner.go:164] Run: docker container inspect functional-643455 --format={{.State.Status}}
I1115 09:29:45.279446 175074 out.go:179] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1115 09:29:45.280828 175074 out.go:179] - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1115 09:29:45.282049 175074 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1115 09:29:45.282089 175074 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1115 09:29:45.282161 175074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-643455
I1115 09:29:45.300459 175074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21894-124770/.minikube/machines/functional-643455/id_rsa Username:docker}
I1115 09:29:45.401097 175074 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1115 09:29:45.401128 175074 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1115 09:29:45.414246 175074 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1115 09:29:45.414275 175074 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1115 09:29:45.427128 175074 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1115 09:29:45.427159 175074 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1115 09:29:45.440351 175074 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1115 09:29:45.440371 175074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1115 09:29:45.453210 175074 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1115 09:29:45.453239 175074 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1115 09:29:45.466080 175074 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1115 09:29:45.466104 175074 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1115 09:29:45.478812 175074 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1115 09:29:45.478832 175074 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1115 09:29:45.491640 175074 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1115 09:29:45.491661 175074 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1115 09:29:45.504516 175074 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1115 09:29:45.504544 175074 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1115 09:29:45.517511 175074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1115 09:29:45.971880 175074 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p functional-643455 addons enable metrics-server
I1115 09:29:45.973034 175074 addons.go:202] Writing out "functional-643455" config to set dashboard=true...
W1115 09:29:45.973296 175074 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1115 09:29:45.973916 175074 kapi.go:59] client config for functional-643455: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.key", CAFile:"/home/jenkins/minikube-integration/21894-124770/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1115 09:29:45.974398 175074 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1115 09:29:45.974414 175074 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1115 09:29:45.974425 175074 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1115 09:29:45.974431 175074 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1115 09:29:45.974437 175074 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1115 09:29:45.981744 175074 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard kubernetes-dashboard 91378dff-53bd-4511-a040-05bfcf8186f1 791 0 2025-11-15 09:29:45 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-11-15 09:29:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.110.187.89,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.110.187.89],IPFamilies:[IPv4],AllocateLoadBalance
rNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1115 09:29:45.981948 175074 out.go:285] * Launching proxy ...
* Launching proxy ...
I1115 09:29:45.982025 175074 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-643455 proxy --port 36195]
I1115 09:29:45.982369 175074 dashboard.go:159] Waiting for kubectl to output host:port ...
I1115 09:29:46.024597 175074 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1115 09:29:46.024665 175074 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1115 09:29:46.032351 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ce848d28-28d6-46ba-8206-b1ae8767e37d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0007d2f40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00028ab40 TLS:<nil>}
I1115 09:29:46.032460 175074 retry.go:31] will retry after 92.533µs: Temporary Error: unexpected response code: 503
I1115 09:29:46.037655 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5a961632-7f1b-4df1-ae1a-be818160e9ed] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc000aa9bc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008b77c0 TLS:<nil>}
I1115 09:29:46.037730 175074 retry.go:31] will retry after 124.982µs: Temporary Error: unexpected response code: 503
I1115 09:29:46.041049 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d5b748d0-fa30-430b-91f6-584421f84cc2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc000967600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cd040 TLS:<nil>}
I1115 09:29:46.041125 175074 retry.go:31] will retry after 153.888µs: Temporary Error: unexpected response code: 503
I1115 09:29:46.044208 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5b45c509-219c-430e-992e-f4bedd51f585] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0009676c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00028af00 TLS:<nil>}
I1115 09:29:46.044266 175074 retry.go:31] will retry after 268.869µs: Temporary Error: unexpected response code: 503
I1115 09:29:46.047495 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7b289d8e-cf4a-4cb1-a370-000b61fd85b4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0007d3080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00028b040 TLS:<nil>}
I1115 09:29:46.047536 175074 retry.go:31] will retry after 289.853µs: Temporary Error: unexpected response code: 503
I1115 09:29:46.050643 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d81701e5-5345-4a01-9ee2-a6697ea33e1e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0009677c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008b7900 TLS:<nil>}
I1115 09:29:46.050681 175074 retry.go:31] will retry after 803.679µs: Temporary Error: unexpected response code: 503
I1115 09:29:46.053731 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3b13f12a-c48d-487c-9a63-5fbfaa1bdf89] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc000aa9cc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00028b180 TLS:<nil>}
I1115 09:29:46.053786 175074 retry.go:31] will retry after 1.164048ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.057935 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dd6c1068-1a08-451e-92c3-c3ecf1c49982] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0009678c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cd180 TLS:<nil>}
I1115 09:29:46.057978 175074 retry.go:31] will retry after 1.188976ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.062172 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b7dbb167-34f1-4c84-aa73-eb19fce99ce3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0007d3180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00028b2c0 TLS:<nil>}
I1115 09:29:46.062250 175074 retry.go:31] will retry after 1.458312ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.066605 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e7069d45-a177-4a7d-9145-a8a414282969] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0009679c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008b7cc0 TLS:<nil>}
I1115 09:29:46.066654 175074 retry.go:31] will retry after 2.60561ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.072109 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2da1e842-521d-4637-9aff-22299e1101e7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0007d3240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00028b400 TLS:<nil>}
I1115 09:29:46.072166 175074 retry.go:31] will retry after 4.109761ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.079553 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[61ab0f81-51cc-4f67-98c4-027954e76030] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc000aa9f00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008b7e00 TLS:<nil>}
I1115 09:29:46.079595 175074 retry.go:31] will retry after 8.805052ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.091119 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cbb9cd51-3faa-44d8-a0bc-3e77c2ad6c2b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc000967a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cd2c0 TLS:<nil>}
I1115 09:29:46.091187 175074 retry.go:31] will retry after 14.222834ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.109406 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[00f2fa93-0374-477f-a44c-e7b407d4c47c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc000967b00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00028b540 TLS:<nil>}
I1115 09:29:46.109470 175074 retry.go:31] will retry after 20.00421ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.132358 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[17288325-10de-43b5-9db2-f2d37e7983d2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0007d3340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00028b680 TLS:<nil>}
I1115 09:29:46.132430 175074 retry.go:31] will retry after 38.600885ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.174566 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f4d52d14-7c49-4785-b634-4b2a50eabe13] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc000967c00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004e9680 TLS:<nil>}
I1115 09:29:46.174662 175074 retry.go:31] will retry after 64.48062ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.242945 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9615dd49-b746-4e20-8534-959a108682ef] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0007d3440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00028b7c0 TLS:<nil>}
I1115 09:29:46.243012 175074 retry.go:31] will retry after 79.278043ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.326564 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[17f3e7f2-ab4c-4c60-b71f-3c5dd76bda1a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0008c2100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004e97c0 TLS:<nil>}
I1115 09:29:46.326654 175074 retry.go:31] will retry after 146.9789ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.477076 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[76f87f82-2b6a-47ea-8ebe-56d3f1236c15] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0008c2180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cd400 TLS:<nil>}
I1115 09:29:46.477142 175074 retry.go:31] will retry after 111.988301ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.593698 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f29c714a-ab2e-4015-b025-1b3a6e09cf96] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0007d3580 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cd540 TLS:<nil>}
I1115 09:29:46.593765 175074 retry.go:31] will retry after 302.242022ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.899413 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2e57060b-765b-4a2b-889e-57fad33862d4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc000967d40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004e9900 TLS:<nil>}
I1115 09:29:46.899480 175074 retry.go:31] will retry after 256.959249ms: Temporary Error: unexpected response code: 503
I1115 09:29:47.160022 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9c58d409-9e7a-41d0-be1c-671dce00cf9c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:47 GMT]] Body:0xc0007d3680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00028b900 TLS:<nil>}
I1115 09:29:47.160114 175074 retry.go:31] will retry after 350.125562ms: Temporary Error: unexpected response code: 503
I1115 09:29:47.513744 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b356abf3-bfc8-4503-8f90-eea4fffacc09] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:47 GMT]] Body:0xc0007d3740 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004e9a40 TLS:<nil>}
I1115 09:29:47.513819 175074 retry.go:31] will retry after 1.085866206s: Temporary Error: unexpected response code: 503
I1115 09:29:48.602962 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[084129e7-57d8-4c61-a9c4-217a9c98b6f1] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:48 GMT]] Body:0xc0008c2280 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004e9b80 TLS:<nil>}
I1115 09:29:48.603028 175074 retry.go:31] will retry after 1.263396499s: Temporary Error: unexpected response code: 503
I1115 09:29:49.870521 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[edd3a503-2e07-45b7-9b63-e56d0b996c42] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:49 GMT]] Body:0xc0007d3840 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cd680 TLS:<nil>}
I1115 09:29:49.870604 175074 retry.go:31] will retry after 1.05300319s: Temporary Error: unexpected response code: 503
I1115 09:29:50.926985 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7dd78533-31b4-4465-a690-50de1b318898] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:50 GMT]] Body:0xc000967e80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004e9cc0 TLS:<nil>}
I1115 09:29:50.927049 175074 retry.go:31] will retry after 3.310876895s: Temporary Error: unexpected response code: 503
I1115 09:29:54.243630 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0650ec5b-e5b4-4e83-8740-b628a6e3aa4b] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:54 GMT]] Body:0xc0007d3940 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00028bb80 TLS:<nil>}
I1115 09:29:54.243697 175074 retry.go:31] will retry after 2.501745474s: Temporary Error: unexpected response code: 503
I1115 09:29:56.749134 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fd609a6b-6cb0-4718-ad4d-703a8cf13660] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:56 GMT]] Body:0xc0008c2380 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00028bcc0 TLS:<nil>}
I1115 09:29:56.749196 175074 retry.go:31] will retry after 5.182712673s: Temporary Error: unexpected response code: 503
I1115 09:30:01.936734 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[68ffae2a-b995-4735-8830-014b4cc33c0a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:30:01 GMT]] Body:0xc0008fe980 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cd7c0 TLS:<nil>}
I1115 09:30:01.936806 175074 retry.go:31] will retry after 10.79189693s: Temporary Error: unexpected response code: 503
I1115 09:30:12.733127 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[102f7269-032b-42cf-b7b9-bb0467e2e9dc] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:30:12 GMT]] Body:0xc0008c2480 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003a0000 TLS:<nil>}
I1115 09:30:12.733204 175074 retry.go:31] will retry after 15.965097719s: Temporary Error: unexpected response code: 503
I1115 09:30:28.704391 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3b47b2cd-8591-4fa8-b088-629956bafbd0] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:30:28 GMT]] Body:0xc0008c2500 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003a0140 TLS:<nil>}
I1115 09:30:28.704467 175074 retry.go:31] will retry after 17.465830656s: Temporary Error: unexpected response code: 503
I1115 09:30:46.173985 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[87340e33-43d6-41ca-b133-755dd72e2ac3] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:30:46 GMT]] Body:0xc0007d3b00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cd900 TLS:<nil>}
I1115 09:30:46.174070 175074 retry.go:31] will retry after 36.160197642s: Temporary Error: unexpected response code: 503
I1115 09:31:22.338624 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[04904562-2cbd-42ea-bd95-454fc19beec4] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:31:22 GMT]] Body:0xc0007d3bc0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003a0280 TLS:<nil>}
I1115 09:31:22.338693 175074 retry.go:31] will retry after 59.29160483s: Temporary Error: unexpected response code: 503
I1115 09:32:21.634169 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9479e8c6-2a13-406d-a3e2-7f88c69e05fc] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:32:21 GMT]] Body:0xc0008c20c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003a03c0 TLS:<nil>}
I1115 09:32:21.634248 175074 retry.go:31] will retry after 1m8.35287164s: Temporary Error: unexpected response code: 503
I1115 09:33:29.993756 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[eab7ddfc-354e-45ab-9f36-878aa038a653] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:33:29 GMT]] Body:0xc0007d2ac0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003a0500 TLS:<nil>}
I1115 09:33:29.993826 175074 retry.go:31] will retry after 41.862939328s: Temporary Error: unexpected response code: 503
I1115 09:34:11.860353 175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[40097749-5499-46b5-bf11-e6683d7b0e15] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:34:11 GMT]] Body:0xc0008c20c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003a0640 TLS:<nil>}
I1115 09:34:11.860433 175074 retry.go:31] will retry after 1m21.415223078s: Temporary Error: unexpected response code: 503
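Everything from dashboard.go:216 down to here is a single poll-until-healthy loop: each GET to the kubectl proxy URL returns 503 (the dashboard service has no ready endpoints), retry.go sleeps for a roughly doubling interval, and the loop repeats until the test's timeout stops the command. A self-contained Go sketch of that retry pattern, with an illustrative URL, deadline, and backoff constants rather than minikube's actual retry implementation:

// backoff_poll.go: hedged sketch of the poll-until-200 loop seen in the log above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForOK polls url until it returns HTTP 200 or the deadline passes,
// sleeping with a roughly doubling backoff between attempts.
func waitForOK(url string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	backoff := 100 * time.Microsecond // the log starts with ~100µs waits
	for {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			err = fmt.Errorf("unexpected response code: %d", resp.StatusCode)
		}
		if time.Now().After(stop) {
			return fmt.Errorf("timed out waiting for %s: %v", url, err)
		}
		time.Sleep(backoff)
		backoff *= 2 // grow the wait, as the retry.go intervals above roughly do
	}
}

func main() {
	// Hypothetical proxy URL in the same shape as the one polled in the log.
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	if err := waitForOK(url, 3*time.Minute); err != nil {
		fmt.Println(err)
	}
}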
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect functional-643455
helpers_test.go:243: (dbg) docker inspect functional-643455:
-- stdout --
[
{
"Id": "75d4c555182ef259c4fe3cf0e40dc50aaa963f9faa0a719174698fca1b7fbe0f",
"Created": "2025-11-15T09:27:38.460289529Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 158671,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-11-15T09:27:38.49284776Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
"ResolvConfPath": "/var/lib/docker/containers/75d4c555182ef259c4fe3cf0e40dc50aaa963f9faa0a719174698fca1b7fbe0f/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/75d4c555182ef259c4fe3cf0e40dc50aaa963f9faa0a719174698fca1b7fbe0f/hostname",
"HostsPath": "/var/lib/docker/containers/75d4c555182ef259c4fe3cf0e40dc50aaa963f9faa0a719174698fca1b7fbe0f/hosts",
"LogPath": "/var/lib/docker/containers/75d4c555182ef259c4fe3cf0e40dc50aaa963f9faa0a719174698fca1b7fbe0f/75d4c555182ef259c4fe3cf0e40dc50aaa963f9faa0a719174698fca1b7fbe0f-json.log",
"Name": "/functional-643455",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-643455:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "100m"
}
},
"NetworkMode": "functional-643455",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "private",
"Dns": null,
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": null,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "75d4c555182ef259c4fe3cf0e40dc50aaa963f9faa0a719174698fca1b7fbe0f",
"LowerDir": "/var/lib/docker/overlay2/b6fb531e75d0eea8076d7f643cf1d8c98b7ecbdafda46cdb359559dfe5e18da2-init/diff:/var/lib/docker/overlay2/dd55a3984a0401bbe9c47729dc0fec07395bf4daab8d10377766fb7a6cf0f6d2/diff",
"MergedDir": "/var/lib/docker/overlay2/b6fb531e75d0eea8076d7f643cf1d8c98b7ecbdafda46cdb359559dfe5e18da2/merged",
"UpperDir": "/var/lib/docker/overlay2/b6fb531e75d0eea8076d7f643cf1d8c98b7ecbdafda46cdb359559dfe5e18da2/diff",
"WorkDir": "/var/lib/docker/overlay2/b6fb531e75d0eea8076d7f643cf1d8c98b7ecbdafda46cdb359559dfe5e18da2/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-643455",
"Source": "/var/lib/docker/volumes/functional-643455/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-643455",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-643455",
"name.minikube.sigs.k8s.io": "functional-643455",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"SandboxID": "029c034dcecc64b8ccca91cb8f52a0ca277442aca7cd6409ecdd0fb513d4f17f",
"SandboxKey": "/var/run/docker/netns/029c034dcecc",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32783"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32784"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32787"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32785"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32786"
}
]
},
"Networks": {
"functional-643455": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2",
"IPv6Address": ""
},
"Links": null,
"Aliases": null,
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "be24d09662bb1f50ee771e52c11387b4f471476e50e89b32b3a29bd33fc96223",
"EndpointID": "2c39da97daa59f3d6450a6acb87027688136e17fb9118a11649286155d98bd18",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"MacAddress": "d6:72:d6:b1:0e:d6",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-643455",
"75d4c555182e"
]
}
}
}
}
]
-- /stdout --
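The NetworkSettings.Ports block in the inspect output above is where the harness reads the published host ports it keeps querying (for example the cli_runner lines that template out "22/tcp", which resolves to 127.0.0.1:32783, while the apiserver port 8441/tcp maps to 32786). A small Go sketch of the same Go-template query via os/exec, with the container name and port taken from this log purely as examples:

// hostport.go: hedged sketch of reading a published host port from `docker inspect`,
// mirroring the Go-template command the cli_runner lines execute above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort returns the host port that containerPort (e.g. "8441/tcp") is published on.
func hostPort(container, containerPort string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// In the inspect output above, 8441/tcp is published on 127.0.0.1:32786.
	port, err := hostPort("functional-643455", "8441/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("apiserver host port:", port)
}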
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-643455 -n functional-643455
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p functional-643455 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-643455 logs -n 25: (1.247713372s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs:
-- stdout --
==> Audit <==
┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ functional-643455 ssh findmnt -T /mount-9p | grep 9p │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
│ ssh │ functional-643455 ssh -- ls -la /mount-9p │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
│ ssh │ functional-643455 ssh cat /mount-9p/test-1763198973779415877 │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
│ ssh │ functional-643455 ssh stat /mount-9p/created-by-test │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
│ ssh │ functional-643455 ssh stat /mount-9p/created-by-pod │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
│ ssh │ functional-643455 ssh sudo umount -f /mount-9p │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
│ ssh │ functional-643455 ssh findmnt -T /mount-9p | grep 9p │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ │
│ mount │ -p functional-643455 /tmp/TestFunctionalparallelMountCmdspecific-port3528019422/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ │
│ ssh │ functional-643455 ssh findmnt -T /mount-9p | grep 9p │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
│ ssh │ functional-643455 ssh -- ls -la /mount-9p │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
│ ssh │ functional-643455 ssh sudo umount -f /mount-9p │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ │
│ mount │ -p functional-643455 /tmp/TestFunctionalparallelMountCmdVerifyCleanup751928891/001:/mount3 --alsologtostderr -v=1 │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ │
│ mount │ -p functional-643455 /tmp/TestFunctionalparallelMountCmdVerifyCleanup751928891/001:/mount2 --alsologtostderr -v=1 │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ │
│ ssh │ functional-643455 ssh findmnt -T /mount1 │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ │
│ mount │ -p functional-643455 /tmp/TestFunctionalparallelMountCmdVerifyCleanup751928891/001:/mount1 --alsologtostderr -v=1 │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ │
│ ssh │ functional-643455 ssh findmnt -T /mount1 │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
│ ssh │ functional-643455 ssh findmnt -T /mount2 │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
│ ssh │ functional-643455 ssh findmnt -T /mount3 │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
│ mount │ -p functional-643455 --kill=true │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ │
│ addons │ functional-643455 addons list │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
│ addons │ functional-643455 addons list -o json │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
│ start │ -p functional-643455 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ │
│ start │ -p functional-643455 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ │
│ start │ -p functional-643455 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ │
│ dashboard │ --url --port 36195 -p functional-643455 --alsologtostderr -v=1 │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ │
└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/11/15 09:29:44
Running on machine: ubuntu-20-agent-13
Binary: Built with gc go1.24.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1115 09:29:44.875838 174988 out.go:360] Setting OutFile to fd 1 ...
I1115 09:29:44.875936 174988 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:29:44.875944 174988 out.go:374] Setting ErrFile to fd 2...
I1115 09:29:44.875957 174988 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:29:44.876325 174988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-124770/.minikube/bin
I1115 09:29:44.876748 174988 out.go:368] Setting JSON to false
I1115 09:29:44.877812 174988 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":15135,"bootTime":1763183850,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1115 09:29:44.877921 174988 start.go:143] virtualization: kvm guest
I1115 09:29:44.880159 174988 out.go:179] * [functional-643455] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1115 09:29:44.881646 174988 out.go:179] - MINIKUBE_LOCATION=21894
I1115 09:29:44.881678 174988 notify.go:221] Checking for updates...
I1115 09:29:44.884009 174988 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1115 09:29:44.885173 174988 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21894-124770/kubeconfig
I1115 09:29:44.886339 174988 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-124770/.minikube
I1115 09:29:44.887594 174988 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1115 09:29:44.888818 174988 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1115 09:29:44.890443 174988 config.go:182] Loaded profile config "functional-643455": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1115 09:29:44.890911 174988 driver.go:422] Setting default libvirt URI to qemu:///system
I1115 09:29:44.915414 174988 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
I1115 09:29:44.915506 174988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1115 09:29:44.974874 174988 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-15 09:29:44.965206839 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1115 09:29:44.974982 174988 docker.go:319] overlay module found
I1115 09:29:44.976788 174988 out.go:179] * Using the docker driver based on existing profile
I1115 09:29:44.978139 174988 start.go:309] selected driver: docker
I1115 09:29:44.978155 174988 start.go:930] validating driver "docker" against &{Name:functional-643455 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-643455 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1115 09:29:44.978254 174988 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1115 09:29:44.980009 174988 out.go:203]
W1115 09:29:44.981297 174988 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
I1115 09:29:44.982533 174988 out.go:203]
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
2cda70e5609c4 56cc512116c8f 5 minutes ago Exited mount-munger 0 b835aa6cdf85b busybox-mount default
17a282b1fa4c9 5107333e08a87 5 minutes ago Running mysql 0 bb3d9dcf11838 mysql-5bb876957f-5bd4x default
bf75b4fead77d 9056ab77afb8e 5 minutes ago Running echo-server 0 0a593cd9578a6 hello-node-connect-7d85dfc575-q2qtv default
564d4fabc270f 6e38f40d628db 5 minutes ago Running storage-provisioner 2 7280b209c4a1e storage-provisioner kube-system
59b2e611066bb c80c8dbafe7dd 5 minutes ago Running kube-controller-manager 2 488ecac322c4f kube-controller-manager-functional-643455 kube-system
babc27772525c c3994bc696102 5 minutes ago Running kube-apiserver 0 26840f129c94e kube-apiserver-functional-643455 kube-system
20e7221441e30 5f1f5298c888d 5 minutes ago Running etcd 1 f97b5bca4f6a7 etcd-functional-643455 kube-system
fb18f66b9833e 409467f978b4a 6 minutes ago Running kindnet-cni 1 bfebc994070f2 kindnet-9ck6k kube-system
0eea2114b1571 fc25172553d79 6 minutes ago Running kube-proxy 1 6075525d36525 kube-proxy-nwjjp kube-system
cc9efc6fc9059 c80c8dbafe7dd 6 minutes ago Exited kube-controller-manager 1 488ecac322c4f kube-controller-manager-functional-643455 kube-system
4e1710787e24e 7dd6aaa1717ab 6 minutes ago Running kube-scheduler 1 558179c3009ad kube-scheduler-functional-643455 kube-system
dd1dfa9b2e913 6e38f40d628db 6 minutes ago Exited storage-provisioner 1 7280b209c4a1e storage-provisioner kube-system
ec35552550ecd 52546a367cc9e 6 minutes ago Running coredns 1 d3841618a2a6f coredns-66bc5c9577-gslgg kube-system
71224bce65213 52546a367cc9e 6 minutes ago Exited coredns 0 d3841618a2a6f coredns-66bc5c9577-gslgg kube-system
4c8deb830b3c4 409467f978b4a 6 minutes ago Exited kindnet-cni 0 bfebc994070f2 kindnet-9ck6k kube-system
12df49b0bdbf1 fc25172553d79 6 minutes ago Exited kube-proxy 0 6075525d36525 kube-proxy-nwjjp kube-system
81fd6e38ed44f 7dd6aaa1717ab 6 minutes ago Exited kube-scheduler 0 558179c3009ad kube-scheduler-functional-643455 kube-system
dc71f2be2fc35 5f1f5298c888d 6 minutes ago Exited etcd 0 f97b5bca4f6a7 etcd-functional-643455 kube-system
==> containerd <==
Nov 15 09:32:27 functional-643455 containerd[3858]: time="2025-11-15T09:32:27.122746937Z" level=info msg="PullImage \"docker.io/nginx:latest\""
Nov 15 09:32:27 functional-643455 containerd[3858]: time="2025-11-15T09:32:27.124921553Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Nov 15 09:32:27 functional-643455 containerd[3858]: time="2025-11-15T09:32:27.197821548Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Nov 15 09:32:27 functional-643455 containerd[3858]: time="2025-11-15T09:32:27.276792155Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Nov 15 09:32:27 functional-643455 containerd[3858]: time="2025-11-15T09:32:27.276866159Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
Nov 15 09:32:27 functional-643455 containerd[3858]: time="2025-11-15T09:32:27.277552401Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
Nov 15 09:32:27 functional-643455 containerd[3858]: time="2025-11-15T09:32:27.278929652Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Nov 15 09:32:27 functional-643455 containerd[3858]: time="2025-11-15T09:32:27.334329706Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Nov 15 09:32:27 functional-643455 containerd[3858]: time="2025-11-15T09:32:27.417802844Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Nov 15 09:32:27 functional-643455 containerd[3858]: time="2025-11-15T09:32:27.417909871Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=10967"
Nov 15 09:32:31 functional-643455 containerd[3858]: time="2025-11-15T09:32:31.122293366Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
Nov 15 09:32:31 functional-643455 containerd[3858]: time="2025-11-15T09:32:31.123962933Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Nov 15 09:32:31 functional-643455 containerd[3858]: time="2025-11-15T09:32:31.181545281Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Nov 15 09:32:31 functional-643455 containerd[3858]: time="2025-11-15T09:32:31.263701819Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Nov 15 09:32:31 functional-643455 containerd[3858]: time="2025-11-15T09:32:31.263788620Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11044"
Nov 15 09:32:34 functional-643455 containerd[3858]: time="2025-11-15T09:32:34.121440311Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
Nov 15 09:32:34 functional-643455 containerd[3858]: time="2025-11-15T09:32:34.123331179Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Nov 15 09:32:34 functional-643455 containerd[3858]: time="2025-11-15T09:32:34.208225182Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Nov 15 09:32:34 functional-643455 containerd[3858]: time="2025-11-15T09:32:34.292035532Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Nov 15 09:32:34 functional-643455 containerd[3858]: time="2025-11-15T09:32:34.292089178Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10998"
Nov 15 09:32:40 functional-643455 containerd[3858]: time="2025-11-15T09:32:40.121956160Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
Nov 15 09:32:40 functional-643455 containerd[3858]: time="2025-11-15T09:32:40.123910312Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Nov 15 09:32:40 functional-643455 containerd[3858]: time="2025-11-15T09:32:40.205297259Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Nov 15 09:32:40 functional-643455 containerd[3858]: time="2025-11-15T09:32:40.287918113Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Nov 15 09:32:40 functional-643455 containerd[3858]: time="2025-11-15T09:32:40.287975556Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
==> coredns [71224bce65213776a3058b9d9b685001f8515f08b6b57cb996061ae7af3d144b] <==
maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:42898 - 36145 "HINFO IN 8984940392331241906.8485209873416469064. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045622994s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> coredns [ec35552550ecdcc3b355ec8adfa48f77638c99f67e22e267a5a5312cda6d6e69] <==
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:38627 - 823 "HINFO IN 155197805458491775.8394333180523951329. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.088504316s
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
==> describe nodes <==
Name: functional-643455
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-643455
kubernetes.io/os=linux
minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
minikube.k8s.io/name=functional-643455
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_11_15T09_27_54_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 15 Nov 2025 09:27:51 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-643455
AcquireTime: <unset>
RenewTime: Sat, 15 Nov 2025 09:34:36 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 15 Nov 2025 09:33:25 +0000 Sat, 15 Nov 2025 09:27:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 15 Nov 2025 09:33:25 +0000 Sat, 15 Nov 2025 09:27:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 15 Nov 2025 09:33:25 +0000 Sat, 15 Nov 2025 09:27:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 15 Nov 2025 09:33:25 +0000 Sat, 15 Nov 2025 09:28:10 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-643455
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863360Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863360Ki
pods: 110
System Info:
Machine ID: 608131c53731cf9698d1f7346905c52d
System UUID: 9f8a9454-d3ff-4e20-a36e-cf2efe1bcbc9
Boot ID: fbc9987d-de80-43b3-8f69-13458401c4dd
Kernel Version: 6.8.0-1043-gcp
OS Image: Debian GNU/Linux 12 (bookworm)
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.28
Kubelet Version: v1.34.1
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (15 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-75c85bcc94-sx2nl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m24s
default hello-node-connect-7d85dfc575-q2qtv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m33s
default mysql-5bb876957f-5bd4x 600m (7%) 700m (8%) 512Mi (1%) 700Mi (2%) 5m32s
default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m26s
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m24s
kube-system coredns-66bc5c9577-gslgg 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 6m47s
kube-system etcd-functional-643455 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 6m53s
kube-system kindnet-9ck6k 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 6m47s
kube-system kube-apiserver-functional-643455 250m (3%) 0 (0%) 0 (0%) 0 (0%) 5m56s
kube-system kube-controller-manager-functional-643455 200m (2%) 0 (0%) 0 (0%) 0 (0%) 6m53s
kube-system kube-proxy-nwjjp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m47s
kube-system kube-scheduler-functional-643455 100m (1%) 0 (0%) 0 (0%) 0 (0%) 6m53s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m47s
kubernetes-dashboard dashboard-metrics-scraper-77bf4d6c4c-gcsp4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m1s
kubernetes-dashboard kubernetes-dashboard-855c9754f9-gq4vv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m1s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1450m (18%) 800m (10%)
memory 732Mi (2%) 920Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 6m46s kube-proxy
Normal Starting 5m50s kube-proxy
Normal NodeHasSufficientPID 6m53s kubelet Node functional-643455 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 6m53s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 6m53s kubelet Node functional-643455 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m53s kubelet Node functional-643455 status is now: NodeHasNoDiskPressure
Normal Starting 6m53s kubelet Starting kubelet.
Normal RegisteredNode 6m48s node-controller Node functional-643455 event: Registered Node functional-643455 in Controller
Normal NodeReady 6m36s kubelet Node functional-643455 status is now: NodeReady
Normal Starting 5m59s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 5m59s (x8 over 5m59s) kubelet Node functional-643455 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m59s (x8 over 5m59s) kubelet Node functional-643455 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m59s (x7 over 5m59s) kubelet Node functional-643455 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 5m59s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 5m55s node-controller Node functional-643455 event: Registered Node functional-643455 in Controller
==> dmesg <==
==> etcd [20e7221441e30ddff73a233e0fb39c7859b8cdcf308f699b3da6d4ea14757f97] <==
{"level":"warn","ts":"2025-11-15T09:28:48.547207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50440","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:28:48.555891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50460","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:28:48.561820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50474","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:28:48.568579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50502","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:28:48.574820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50516","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:28:48.580802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50546","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:28:48.587893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50572","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:28:48.593831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50594","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:28:48.601012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50602","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:28:48.617260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50620","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:28:48.623700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50634","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:28:48.629882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50642","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:28:48.637301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50672","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:28:48.643481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50676","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:28:48.649905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50698","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:28:48.656858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50710","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:28:48.663547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50724","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:28:48.669659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50742","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:28:48.676825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50752","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:28:48.682971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50768","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:28:48.689137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50794","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:28:48.707604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50816","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:28:48.714647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50834","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:28:48.721973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50856","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:28:48.772471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50872","server-name":"","error":"EOF"}
==> etcd [dc71f2be2fc35f940d08e52670de1d4a1226f5ed51724f2c27632ec3469c374d] <==
{"level":"warn","ts":"2025-11-15T09:27:50.760606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35402","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:27:50.767815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35420","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:27:50.773822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35430","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:27:50.794096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35448","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:27:50.801683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35464","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:27:50.810153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35474","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-15T09:27:50.859079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35488","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-11-15T09:28:45.349803Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2025-11-15T09:28:45.349978Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-643455","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
{"level":"error","ts":"2025-11-15T09:28:45.350141Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-11-15T09:28:45.351775Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-11-15T09:28:45.353125Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-11-15T09:28:45.353180Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"warn","ts":"2025-11-15T09:28:45.353221Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"info","ts":"2025-11-15T09:28:45.353283Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
{"level":"info","ts":"2025-11-15T09:28:45.353293Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
{"level":"warn","ts":"2025-11-15T09:28:45.353298Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"error","ts":"2025-11-15T09:28:45.353318Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"warn","ts":"2025-11-15T09:28:45.353243Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
{"level":"warn","ts":"2025-11-15T09:28:45.353342Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
{"level":"error","ts":"2025-11-15T09:28:45.353354Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-11-15T09:28:45.355660Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"error","ts":"2025-11-15T09:28:45.355740Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-11-15T09:28:45.355771Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2025-11-15T09:28:45.355789Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-643455","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
==> kernel <==
09:34:46 up 4:17, 0 user, load average: 0.12, 0.74, 1.66
Linux functional-643455 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kindnet [4c8deb830b3c40c4c2e7460472b20a8a34868b3c6ed2a2b28e8e2eb708d19b1e] <==
I1115 09:28:00.290015 1 main.go:109] connected to apiserver: https://10.96.0.1:443
I1115 09:28:00.290318 1 main.go:139] hostIP = 192.168.49.2
podIP = 192.168.49.2
I1115 09:28:00.290463 1 main.go:148] setting mtu 1500 for CNI
I1115 09:28:00.290482 1 main.go:178] kindnetd IP family: "ipv4"
I1115 09:28:00.290506 1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
time="2025-11-15T09:28:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
I1115 09:28:00.492641 1 controller.go:377] "Starting controller" name="kube-network-policies"
I1115 09:28:00.492730 1 controller.go:381] "Waiting for informer caches to sync"
I1115 09:28:00.492745 1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
I1115 09:28:00.493004 1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
I1115 09:28:00.885572 1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
I1115 09:28:00.885610 1 metrics.go:72] Registering metrics
I1115 09:28:00.885711 1 controller.go:711] "Syncing nftables rules"
I1115 09:28:10.494295 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1115 09:28:10.494381 1 main.go:301] handling current node
I1115 09:28:20.498719 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1115 09:28:20.498760 1 main.go:301] handling current node
I1115 09:28:30.497908 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1115 09:28:30.497939 1 main.go:301] handling current node
==> kindnet [fb18f66b9833e3dde538053ed3f57dd6dfdb05cb5a04a8703272118f19fe0bd1] <==
I1115 09:32:46.191198 1 main.go:301] handling current node
I1115 09:32:56.191768 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1115 09:32:56.191799 1 main.go:301] handling current node
I1115 09:33:06.194092 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1115 09:33:06.194129 1 main.go:301] handling current node
I1115 09:33:16.194770 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1115 09:33:16.194812 1 main.go:301] handling current node
I1115 09:33:26.191188 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1115 09:33:26.191228 1 main.go:301] handling current node
I1115 09:33:36.195146 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1115 09:33:36.195182 1 main.go:301] handling current node
I1115 09:33:46.199914 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1115 09:33:46.199951 1 main.go:301] handling current node
I1115 09:33:56.191660 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1115 09:33:56.191707 1 main.go:301] handling current node
I1115 09:34:06.199686 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1115 09:34:06.199727 1 main.go:301] handling current node
I1115 09:34:16.198199 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1115 09:34:16.198236 1 main.go:301] handling current node
I1115 09:34:26.191572 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1115 09:34:26.191604 1 main.go:301] handling current node
I1115 09:34:36.193216 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1115 09:34:36.193255 1 main.go:301] handling current node
I1115 09:34:46.197401 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1115 09:34:46.197436 1 main.go:301] handling current node
==> kube-apiserver [babc27772525c961baf898d5c14615a30cff6db31c2bcaed456c0b27dbbaeeb8] <==
I1115 09:28:49.239447 1 handler_discovery.go:451] Starting ResourceDiscoveryManager
I1115 09:28:49.262732 1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
I1115 09:28:50.141810 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I1115 09:28:50.268508 1 controller.go:667] quota admission added evaluator for: serviceaccounts
W1115 09:28:50.443490 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I1115 09:28:50.444740 1 controller.go:667] quota admission added evaluator for: endpoints
I1115 09:28:50.449478 1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1115 09:28:50.987214 1 controller.go:667] quota admission added evaluator for: daemonsets.apps
I1115 09:28:51.077837 1 controller.go:667] quota admission added evaluator for: deployments.apps
I1115 09:28:51.125716 1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1115 09:28:51.137090 1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1115 09:28:56.155591 1 controller.go:667] quota admission added evaluator for: replicasets.apps
I1115 09:29:08.941026 1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.120.65"}
I1115 09:29:13.445263 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.91.133"}
I1115 09:29:14.778145 1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.99.53.42"}
I1115 09:29:20.827479 1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.97.14.196"}
I1115 09:29:22.112518 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.96.127.55"}
E1115 09:29:28.953220 1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:39926: use of closed network connection
E1115 09:29:30.359901 1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:39944: use of closed network connection
E1115 09:29:32.570122 1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:39958: use of closed network connection
I1115 09:29:45.819201 1 controller.go:667] quota admission added evaluator for: namespaces
I1115 09:29:45.953077 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.187.89"}
I1115 09:29:45.964682 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.163.26"}
==> kube-controller-manager [59b2e611066bb26cf54b4e22ea3bff8df16074d96a0d58f8bca35318b1d8397e] <==
I1115 09:28:51.985933 1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
I1115 09:28:51.985979 1 shared_informer.go:356] "Caches are synced" controller="GC"
I1115 09:28:51.986098 1 shared_informer.go:356] "Caches are synced" controller="attach detach"
I1115 09:28:51.986413 1 shared_informer.go:356] "Caches are synced" controller="HPA"
I1115 09:28:51.986426 1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
I1115 09:28:51.986415 1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
I1115 09:28:51.986509 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
I1115 09:28:51.987007 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
I1115 09:28:51.987074 1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
I1115 09:28:51.987524 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
I1115 09:28:51.988575 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
I1115 09:28:51.991276 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1115 09:28:51.994197 1 shared_informer.go:356] "Caches are synced" controller="node"
I1115 09:28:51.994259 1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
I1115 09:28:51.994301 1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
I1115 09:28:51.994309 1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
I1115 09:28:51.994315 1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
I1115 09:28:52.002224 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1115 09:28:52.012283 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E1115 09:29:45.867793 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1115 09:29:45.871537 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1115 09:29:45.874791 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1115 09:29:45.876115 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1115 09:29:45.878691 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1115 09:29:45.884312 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
==> kube-controller-manager [cc9efc6fc9059c7ecb39bcd62cb964c9b28d22237804da685ab2e20045fee203] <==
I1115 09:28:36.434971 1 serving.go:386] Generated self-signed cert in-memory
I1115 09:28:37.179644 1 controllermanager.go:191] "Starting" version="v1.34.1"
I1115 09:28:37.179668 1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1115 09:28:37.181112 1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I1115 09:28:37.181116 1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I1115 09:28:37.181432 1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
I1115 09:28:37.181459 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
E1115 09:28:47.182963 1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
==> kube-proxy [0eea2114b1571ce0a888ea43435cf1aaf3f9357fdb10b1195e8c51c681f176e2] <==
I1115 09:28:35.926903 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
E1115 09:28:35.927913 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-643455&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1115 09:28:36.929125 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-643455&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1115 09:28:38.805776 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-643455&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1115 09:28:44.306372 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-643455&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
I1115 09:28:55.327547 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1115 09:28:55.327588 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
E1115 09:28:55.327664 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1115 09:28:55.349670 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1115 09:28:55.349735 1 server_linux.go:132] "Using iptables Proxier"
I1115 09:28:55.355434 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1115 09:28:55.355930 1 server.go:527] "Version info" version="v1.34.1"
I1115 09:28:55.355962 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1115 09:28:55.358227 1 config.go:200] "Starting service config controller"
I1115 09:28:55.358308 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1115 09:28:55.358373 1 config.go:106] "Starting endpoint slice config controller"
I1115 09:28:55.358380 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1115 09:28:55.358255 1 config.go:309] "Starting node config controller"
I1115 09:28:55.358404 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1115 09:28:55.358411 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1115 09:28:55.358694 1 config.go:403] "Starting serviceCIDR config controller"
I1115 09:28:55.358709 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1115 09:28:55.458500 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1115 09:28:55.460010 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1115 09:28:55.460039 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-proxy [12df49b0bdbf13f0052ec752866e2308cdebef7eb02aa3c3f90bad04188baeb6] <==
I1115 09:27:59.895569 1 server_linux.go:53] "Using iptables proxy"
I1115 09:27:59.967964 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1115 09:28:00.068553 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1115 09:28:00.068616 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
E1115 09:28:00.068733 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1115 09:28:00.089484 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1115 09:28:00.089545 1 server_linux.go:132] "Using iptables Proxier"
I1115 09:28:00.094834 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1115 09:28:00.095367 1 server.go:527] "Version info" version="v1.34.1"
I1115 09:28:00.095411 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1115 09:28:00.097116 1 config.go:200] "Starting service config controller"
I1115 09:28:00.097147 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1115 09:28:00.097187 1 config.go:106] "Starting endpoint slice config controller"
I1115 09:28:00.097193 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1115 09:28:00.097216 1 config.go:403] "Starting serviceCIDR config controller"
I1115 09:28:00.097304 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1115 09:28:00.097364 1 config.go:309] "Starting node config controller"
I1115 09:28:00.097375 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1115 09:28:00.097382 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1115 09:28:00.197347 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1115 09:28:00.197347 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1115 09:28:00.197618 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
==> kube-scheduler [4e1710787e24ed725efd1baf8185da908cde84b9609641698e1063153aac9e5e] <==
E1115 09:28:41.170473 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1115 09:28:41.178915 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1115 09:28:41.351173 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1115 09:28:41.407919 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1115 09:28:41.450528 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1115 09:28:43.572781 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1115 09:28:44.247390 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1115 09:28:44.491689 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1115 09:28:44.567545 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1115 09:28:44.840315 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1115 09:28:44.890860 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1115 09:28:44.891434 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1115 09:28:45.313304 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1115 09:28:45.522511 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1115 09:28:45.726307 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1115 09:28:45.758983 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1115 09:28:46.185391 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1115 09:28:46.200407 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1115 09:28:46.213130 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1115 09:28:46.246730 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1115 09:28:46.326995 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1115 09:28:46.369849 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1115 09:28:46.787615 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1115 09:28:47.489187 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
I1115 09:28:57.693703 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
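The reflector errors above all trace back to one condition: the API server at 192.168.49.2:8441 was refusing connections for a few seconds and the informers simply retried until it returned; the closing "Caches are synced" line confirms the recovery. Were errors like these to persist, a reasonable first check, shown here only as an illustrative command that was not part of the captured run, is to probe the API server's readiness endpoint directly:

  kubectl --context functional-643455 get --raw='/readyz?verbose'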
==> kube-scheduler [81fd6e38ed44f24f83e30b9f760f68608a59e45ddfa53d48e689f61dc83a06fb] <==
E1115 09:27:51.281612 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1115 09:27:51.281671 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1115 09:27:51.281741 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1115 09:27:51.281807 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1115 09:27:51.281864 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1115 09:27:51.285219 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1115 09:27:51.285423 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1115 09:27:52.086790 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1115 09:27:52.166397 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1115 09:27:52.204946 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1115 09:27:52.217198 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1115 09:27:52.244369 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1115 09:27:52.247394 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1115 09:27:52.316405 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1115 09:27:52.372527 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1115 09:27:52.468910 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1115 09:27:52.492229 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1115 09:27:52.632513 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
I1115 09:27:55.375996 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1115 09:28:35.165221 1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
I1115 09:28:35.165251 1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1115 09:28:35.165269 1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
I1115 09:28:35.165363 1 server.go:263] "[graceful-termination] secure server has stopped listening"
I1115 09:28:35.165459 1 server.go:265] "[graceful-termination] secure server is exiting"
E1115 09:28:35.165486 1 run.go:72] "command failed" err="finished without leader elect"
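These scheduler-side "forbidden" errors appeared only in the first seconds after a restart, while RBAC policy for system:kube-scheduler was presumably still being reconciled; the later "Caches are synced" entry shows they cleared on their own, and the final lines record the scheduler being stopped at 09:28:35 ("finished without leader elect"). If forbidden errors like these did not clear, illustrative checks (not part of this run) would be to verify the scheduler's effective permissions and its default binding:

  kubectl --context functional-643455 auth can-i list pods --as=system:kube-scheduler
  kubectl --context functional-643455 get clusterrolebinding system:kube-scheduler -o yaml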
==> kubelet <==
Nov 15 09:33:41 functional-643455 kubelet[4901]: E1115 09:33:41.121594 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sx2nl" podUID="7ca07bab-7255-4c58-9def-d033a33120e9"
Nov 15 09:33:44 functional-643455 kubelet[4901]: E1115 09:33:44.122349 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d9932061-6756-48e8-bb60-59001527b050"
Nov 15 09:33:44 functional-643455 kubelet[4901]: E1115 09:33:44.122361 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-gcsp4" podUID="bc09e4d8-f970-4eec-83b3-5662106ad81f"
Nov 15 09:33:47 functional-643455 kubelet[4901]: E1115 09:33:47.121557 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="2393f4a7-5ffe-4821-99e3-ea6552a163f7"
Nov 15 09:33:49 functional-643455 kubelet[4901]: E1115 09:33:49.122353 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gq4vv" podUID="4e78954f-1256-46f8-8490-8d686648cde6"
Nov 15 09:33:54 functional-643455 kubelet[4901]: E1115 09:33:54.121339 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sx2nl" podUID="7ca07bab-7255-4c58-9def-d033a33120e9"
Nov 15 09:33:55 functional-643455 kubelet[4901]: E1115 09:33:55.121941 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d9932061-6756-48e8-bb60-59001527b050"
Nov 15 09:33:58 functional-643455 kubelet[4901]: E1115 09:33:58.121810 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-gcsp4" podUID="bc09e4d8-f970-4eec-83b3-5662106ad81f"
Nov 15 09:34:00 functional-643455 kubelet[4901]: E1115 09:34:00.122380 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gq4vv" podUID="4e78954f-1256-46f8-8490-8d686648cde6"
Nov 15 09:34:01 functional-643455 kubelet[4901]: E1115 09:34:01.121726 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="2393f4a7-5ffe-4821-99e3-ea6552a163f7"
Nov 15 09:34:08 functional-643455 kubelet[4901]: E1115 09:34:08.121426 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sx2nl" podUID="7ca07bab-7255-4c58-9def-d033a33120e9"
Nov 15 09:34:08 functional-643455 kubelet[4901]: E1115 09:34:08.122242 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d9932061-6756-48e8-bb60-59001527b050"
Nov 15 09:34:09 functional-643455 kubelet[4901]: E1115 09:34:09.122462 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-gcsp4" podUID="bc09e4d8-f970-4eec-83b3-5662106ad81f"
Nov 15 09:34:14 functional-643455 kubelet[4901]: E1115 09:34:14.122120 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gq4vv" podUID="4e78954f-1256-46f8-8490-8d686648cde6"
Nov 15 09:34:16 functional-643455 kubelet[4901]: E1115 09:34:16.121310 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="2393f4a7-5ffe-4821-99e3-ea6552a163f7"
Nov 15 09:34:21 functional-643455 kubelet[4901]: E1115 09:34:21.121321 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sx2nl" podUID="7ca07bab-7255-4c58-9def-d033a33120e9"
Nov 15 09:34:21 functional-643455 kubelet[4901]: E1115 09:34:21.122179 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d9932061-6756-48e8-bb60-59001527b050"
Nov 15 09:34:22 functional-643455 kubelet[4901]: E1115 09:34:22.121839 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-gcsp4" podUID="bc09e4d8-f970-4eec-83b3-5662106ad81f"
Nov 15 09:34:28 functional-643455 kubelet[4901]: E1115 09:34:28.122144 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gq4vv" podUID="4e78954f-1256-46f8-8490-8d686648cde6"
Nov 15 09:34:29 functional-643455 kubelet[4901]: E1115 09:34:29.121233 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="2393f4a7-5ffe-4821-99e3-ea6552a163f7"
Nov 15 09:34:33 functional-643455 kubelet[4901]: E1115 09:34:33.122170 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-gcsp4" podUID="bc09e4d8-f970-4eec-83b3-5662106ad81f"
Nov 15 09:34:35 functional-643455 kubelet[4901]: E1115 09:34:35.121782 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d9932061-6756-48e8-bb60-59001527b050"
Nov 15 09:34:36 functional-643455 kubelet[4901]: E1115 09:34:36.121688 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sx2nl" podUID="7ca07bab-7255-4c58-9def-d033a33120e9"
Nov 15 09:34:43 functional-643455 kubelet[4901]: E1115 09:34:43.121809 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gq4vv" podUID="4e78954f-1256-46f8-8490-8d686648cde6"
Nov 15 09:34:44 functional-643455 kubelet[4901]: E1115 09:34:44.121669 4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="2393f4a7-5ffe-4821-99e3-ea6552a163f7"
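Every kubelet error in the block above is the same underlying failure: image pulls from Docker Hub are rejected with HTTP 429 (the unauthenticated pull rate limit), so echo-server, nginx, sp-pod's nginx and the dashboard images all stay in ImagePullBackOff. One way to sidestep the in-cluster pull, sketched here under the assumption that the host itself can still pull the image (for example after docker login), is to pull on the host and load the image into the profile:

  docker pull kicbase/echo-server:latest
  minikube -p functional-643455 image load kicbase/echo-server:latest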
==> storage-provisioner [564d4fabc270f8233361e6322badd95ab1ccf27337c2f9b7a77f6c63013f1f9b] <==
W1115 09:34:21.128084 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:34:23.131312 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:34:23.135196 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:34:25.137969 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:34:25.142859 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:34:27.146402 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:34:27.150000 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:34:29.152788 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:34:29.156340 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:34:31.159838 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:34:31.163908 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:34:33.167349 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:34:33.171763 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:34:35.175312 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:34:35.178838 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:34:37.182268 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:34:37.187098 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:34:39.190249 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:34:39.194144 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:34:41.196751 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:34:41.200437 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:34:43.203886 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:34:43.207792 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:34:45.211264 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:34:45.216613 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
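The warnings above are deprecation notices the API server attaches to every core v1 Endpoints request, which client-go then logs; nothing is failing here. As the message says, discovery.k8s.io/v1 EndpointSlice is the replacement; the corresponding objects can be listed with, for example:

  kubectl --context functional-643455 get endpointslices -A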
==> storage-provisioner [dd1dfa9b2e913da162f5d62e0505a76b211080b8cab95935e800b19c395cad29] <==
I1115 09:28:35.787220 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F1115 09:28:35.792470 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-643455 -n functional-643455
helpers_test.go:269: (dbg) Run: kubectl --context functional-643455 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-sx2nl nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-gcsp4 kubernetes-dashboard-855c9754f9-gq4vv
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context functional-643455 describe pod busybox-mount hello-node-75c85bcc94-sx2nl nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-gcsp4 kubernetes-dashboard-855c9754f9-gq4vv
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-643455 describe pod busybox-mount hello-node-75c85bcc94-sx2nl nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-gcsp4 kubernetes-dashboard-855c9754f9-gq4vv: exit status 1 (107.183694ms)
-- stdout --
Name: busybox-mount
Namespace: default
Priority: 0
Service Account: default
Node: functional-643455/192.168.49.2
Start Time: Sat, 15 Nov 2025 09:29:35 +0000
Labels: integration-test=busybox-mount
Annotations: <none>
Status: Succeeded
IP: 10.244.0.9
IPs:
IP: 10.244.0.9
Containers:
mount-munger:
Container ID: containerd://2cda70e5609c48560b543ec240ea7b5b6dfeb79dc264b8dd459d7bba2c5947ff
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID: gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 15 Nov 2025 09:29:37 +0000
Finished: Sat, 15 Nov 2025 09:29:37 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sgv2q (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: HostPath (bare host directory volume)
Path: /mount-9p
HostPathType:
kube-api-access-sgv2q:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m12s default-scheduler Successfully assigned default/busybox-mount to functional-643455
Normal Pulling 5m11s kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Normal Pulled 5m10s kubelet Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.562s (1.562s including waiting). Image size: 2395207 bytes.
Normal Created 5m10s kubelet Created container: mount-munger
Normal Started 5m10s kubelet Started container mount-munger
Name: hello-node-75c85bcc94-sx2nl
Namespace: default
Priority: 0
Service Account: default
Node: functional-643455/192.168.49.2
Start Time: Sat, 15 Nov 2025 09:29:22 +0000
Labels: app=hello-node
pod-template-hash=75c85bcc94
Annotations: <none>
Status: Pending
IP: 10.244.0.7
IPs:
IP: 10.244.0.7
Controlled By: ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:
Image: kicbase/echo-server
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j5rgj (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-j5rgj:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m25s default-scheduler Successfully assigned default/hello-node-75c85bcc94-sx2nl to functional-643455
Normal Pulling 2m13s (x5 over 5m25s) kubelet Pulling image "kicbase/echo-server"
Warning Failed 2m13s (x5 over 5m24s) kubelet Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning Failed 2m13s (x5 over 5m24s) kubelet Error: ErrImagePull
Normal BackOff 11s (x20 over 5m24s) kubelet Back-off pulling image "kicbase/echo-server"
Warning Failed 11s (x20 over 5m24s) kubelet Error: ImagePullBackOff
Name: nginx-svc
Namespace: default
Priority: 0
Service Account: default
Node: functional-643455/192.168.49.2
Start Time: Sat, 15 Nov 2025 09:29:20 +0000
Labels: run=nginx-svc
Annotations: <none>
Status: Pending
IP: 10.244.0.6
IPs:
IP: 10.244.0.6
Containers:
nginx:
Container ID:
Image: docker.io/nginx:alpine
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lrftf (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-lrftf:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m27s default-scheduler Successfully assigned default/nginx-svc to functional-643455
Warning Failed 5m25s kubelet Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal Pulling 2m20s (x5 over 5m26s) kubelet Pulling image "docker.io/nginx:alpine"
Warning Failed 2m20s (x5 over 5m25s) kubelet Error: ErrImagePull
Warning Failed 2m20s (x4 over 5m12s) kubelet Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal BackOff 12s (x21 over 5m24s) kubelet Back-off pulling image "docker.io/nginx:alpine"
Warning Failed 12s (x21 over 5m24s) kubelet Error: ImagePullBackOff
Name: sp-pod
Namespace: default
Priority: 0
Service Account: default
Node: functional-643455/192.168.49.2
Start Time: Sat, 15 Nov 2025 09:29:22 +0000
Labels: test=storage-provisioner
Annotations: <none>
Status: Pending
IP: 10.244.0.8
IPs:
IP: 10.244.0.8
Containers:
myfrontend:
Container ID:
Image: docker.io/nginx
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p5kkb (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mypd:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: myclaim
ReadOnly: false
kube-api-access-p5kkb:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m25s default-scheduler Successfully assigned default/sp-pod to functional-643455
Normal Pulling 2m20s (x5 over 5m24s) kubelet Pulling image "docker.io/nginx"
Warning Failed 2m20s (x5 over 5m24s) kubelet Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning Failed 2m20s (x5 over 5m24s) kubelet Error: ErrImagePull
Warning Failed 18s (x20 over 5m23s) kubelet Error: ImagePullBackOff
Normal BackOff 3s (x21 over 5m23s) kubelet Back-off pulling image "docker.io/nginx"
-- /stdout --
** stderr **
Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-gcsp4" not found
Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-gq4vv" not found
** /stderr **
helpers_test.go:287: kubectl --context functional-643455 describe pod busybox-mount hello-node-75c85bcc94-sx2nl nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-gcsp4 kubernetes-dashboard-855c9754f9-gq4vv: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.22s)
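For context on the failure above: the kubernetes-dashboard and dashboard-metrics-scraper pods never started during the run (they stayed in ImagePullBackOff per the kubelet log) and had already been removed by the time the post-mortem describe ran, hence the NotFound errors. An illustrative way to confirm the dashboard's state while such a run is in progress would be:

  kubectl --context functional-643455 -n kubernetes-dashboard get pods -o wide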